Sample records for principal component decomposition

  1. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial features, and it resembles factor analysis in the sense that both extract the principal components of an image. Principal component analysis is subject to some drawbacks, chiefly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method yields a significant improvement in recognition rate along with better computational efficiency.
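
    A minimal sketch of the pipeline this abstract describes: wavelet decomposition for feature extraction followed by PCA for representation and a simple classifier. The data, wavelet choice and component counts are illustrative assumptions, not the paper's settings (the original used MATLAB; Python with PyWavelets and scikit-learn stands in here).

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    def wavelet_features(img, wavelet="haar", level=2):
        """Keep only the low-frequency approximation subband as the feature map."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        return coeffs[0].ravel()  # approximation coefficients

    # Toy stand-in for a face database: 40 "subjects", one 64x64 image each.
    rng = np.random.default_rng(0)
    X_imgs = rng.random((40, 64, 64))
    y = np.arange(40)

    X = np.array([wavelet_features(im) for im in X_imgs])  # wavelet stage
    pca = PCA(n_components=20).fit(X)                      # eigen-decomposition stage
    Z = pca.transform(X)                                   # pattern representation

    clf = KNeighborsClassifier(n_neighbors=1).fit(Z, y)    # classification stage
    print(clf.predict(pca.transform(X[:3])))
    ```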

  2. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however, these objects are unknown in practice. In this article, we propose a method for obtaining corrected curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
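
    The paper's method is implemented in the R package refund; the numpy-only sketch below illustrates the underlying idea of propagating decomposition uncertainty by bootstrapping the FPC fit, simplified here to a pointwise band for the mean curve on synthetic data rather than the paper's subject-level bands.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, T, K = 100, 50, 3
    t = np.linspace(0, 1, T)
    # Toy functional data: smooth subject-specific signals plus noise.
    Y = (np.sin(2 * np.pi * np.outer(rng.random(n), t))
         + 0.1 * rng.standard_normal((n, T)))

    def fpc_fit(Y, K):
        """Curve estimates conditional on one FPC decomposition (via SVD)."""
        mu = Y.mean(0)
        U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
        scores = U[:, :K] * s[:K]
        return mu + scores @ Vt[:K]

    # Bootstrap subjects so the FPC decomposition itself is re-estimated each time,
    # propagating decomposition-based variability into the band.
    boot = np.stack([fpc_fit(Y[rng.integers(0, n, n)], K).mean(0)
                     for _ in range(200)])
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # pointwise 95% band
    ```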

  3. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    This work develops and presents an algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition. The optimal pattern formation is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results showing the improvement in speech recognition achieved by the proposed optimization algorithm.

  4. Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any time-varying 3D phenomena, such as polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying singular value decomposition (SVD). Based on how rapidly the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for empirical mode decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
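
    A rough numpy/scipy sketch of the first stages of this pipeline under stated assumptions (synthetic data; a fixed k standing in for the 3-10 components chosen by eigenvalue decay). The subsequent empirical mode decomposition and filtering steps are omitted and could be supplied by an EMD package.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(2)
    T, P = 240, 400                     # time samples x spatial grid points
    data = rng.standard_normal((T, P))  # stand-in for a gridded geophysical field

    z = hilbert(data, axis=0)           # complex (analytic) representation in time

    C = z @ z.conj().T / P              # T x T time-based covariance matrix
    U, s, _ = np.linalg.svd(C)          # temporal parts of the principal components

    k = 5                               # stand-in for the 3-10 CPCs kept by eigenvalue decay
    cpc_time = U[:, :k]                 # temporal parts of the first k CPCs
    cpc_space = z.conj().T @ cpc_time   # matching spatial patterns by projection
    # Each column of cpc_time would then go through empirical mode decomposition
    # and filtering before reconstructing the filtered series, per the record.
    ```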

  5. Constrained Principal Component Analysis: Various Applications.

    ERIC Educational Resources Information Center

    Hunter, Michael; Takane, Yoshio

    2002-01-01

    Provides example applications of constrained principal component analysis (CPCA) that illustrate the method in a variety of contexts common to psychological research. Two new analyses, decompositions into finer components and fitting higher-order structures, are presented, followed by an illustration of CPCA on contingency tables and the CPCA of…

  6. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. The three-component decomposition method has two problems: overestimation of the volume scattering component in urban areas, and a model parameter that is artificially fixed to a constant value. Although volume scattering overestimation can be partly resolved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially fixed parameter are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve these problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.

  7. Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.

    ERIC Educational Resources Information Center

    Pham, Tuan Dinh; Mocks, Joachim

    1992-01-01

    Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)

  8. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
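
    A compact numpy sketch of DMD-based background/foreground separation as the abstract outlines it, one SVD plus a linear solve; the rank and the batch-processing style are assumptions, and a real-time version would stream frames as the record describes.

    ```python
    import numpy as np

    def dmd_background(frames, rank=10):
        """frames: (pixels, time) matrix. Returns (background, foreground)."""
        X1, X2 = frames[:, :-1], frames[:, 1:]
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)     # the single SVD
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
        A = U.conj().T @ X2 @ Vt.conj().T / s                 # projected propagator
        evals, W = np.linalg.eig(A)
        Phi = X2 @ Vt.conj().T / s @ W                        # DMD modes
        omega = np.log(evals)                                 # continuous-time eigenvalues
        bg = np.argmin(np.abs(omega))                         # near-zero frequency = background
        b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0] # one linear equation solve
        t = np.arange(frames.shape[1])
        background = np.outer(Phi[:, bg] * b[bg], np.exp(omega[bg] * t)).real
        return background, frames - background                # sparse residual = foreground

    # Example usage: frames = video.reshape(h * w, n_frames)
    # bg, fg = dmd_background(frames)
    ```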

  9. Effect of noise in principal component analysis with an application to ozone pollution

    NASA Astrophysics Data System (ADS)

    Tsakiri, Katerina G.

    This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can differ essentially from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results on the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor in ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global component, which describes the long-term trend and seasonal variations, and a synoptic-scale component, which describes the short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic-scale components of ozone. The global components are modeled by a linear regression model, and the synoptic-scale components by a vector autoregressive model and the Kalman filter. The coefficient of determination, R2, for the prediction of the synoptic-scale ozone component was found to be highest when we consider the synoptic-scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
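
    A small simulation in the spirit of the thesis's main result: when the covariance eigenvalues are well separated, added independent noise barely rotates the leading principal component, whereas nearly equal eigenvalues make it unstable. All numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    k, n = 4, 5000
    # Well-separated vs. nearly-equal eigenvalue spectra.
    for eigvals in ([10, 5, 2, 1], [3.0, 2.9, 2.8, 2.7]):
        Q, _ = np.linalg.qr(rng.standard_normal((k, k)))      # random true PC basis
        cov = Q @ np.diag(eigvals) @ Q.T
        X = rng.multivariate_normal(np.zeros(k), cov, size=n)
        noise = 0.5 * rng.standard_normal((n, k))             # independent noise
        for label, data in (("clean", X), ("noisy", X + noise)):
            _, vecs = np.linalg.eigh(np.cov(data.T))
            align = abs(vecs[:, -1] @ Q[:, 0])                # |cos| with the true 1st PC
            print(label, eigvals, round(align, 3))
    ```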

  10. Influential Observations in Principal Factor Analysis.

    ERIC Educational Resources Information Center

    Tanaka, Yutaka; Odaka, Yoshimasa

    1989-01-01

    A method is proposed for detecting influential observations in iterative principal factor analysis. Theoretical influence functions are derived for two components of the common variance decomposition. The major mathematical tool is the influence function derived by Tanaka (1988). (SLD)

  11. Relationship between the Decomposition Process of Coarse Woody Debris and Fungal Community Structure as Detected by High-Throughput Sequencing in a Deciduous Broad-Leaved Forest in Japan

    PubMed Central

    Yamashita, Satoshi; Masuya, Hayato; Abe, Shin; Masaki, Takashi; Okabe, Kimiko

    2015-01-01

    We examined the relationship between the community structure of wood-decaying fungi, detected by high-throughput sequencing, and the decomposition rate using 13 years of data from a forest dynamics plot. For molecular analysis and wood density measurements, drill dust samples were collected from logs and stumps of Fagus and Quercus in the plot. Regression using a negative exponential model between wood density and time since death revealed that the decomposition rate of Fagus was greater than that of Quercus. The residual between the expected value obtained from the regression curve and the observed wood density was used as a decomposition rate index. Principal component analysis showed that the fungal community compositions of both Fagus and Quercus changed with time since death. Principal component analysis axis scores were used as an index of fungal community composition. A structural equation model for each wood genus was used to assess the effect of fungal community structure traits on the decomposition rate and how the fungal community structure was determined by the traits of coarse woody debris. Results of the structural equation model suggested that the decomposition rate of Fagus was affected by two fungal community composition components: one that was affected by time since death and another that was not affected by the traits of coarse woody debris. In contrast, the decomposition rate of Quercus was not affected by coarse woody debris traits or fungal community structure. These findings suggest that, in the case of Fagus coarse woody debris, the fungal community structure is related to the decomposition process of its host substrate. Because fungal community structure is affected partly by the decay stage and wood density of its substrate, these factors influence each other. Further research on interactive effects is needed to improve our understanding of the relationship between fungal community structure and the woody debris decomposition process. PMID:26110605

  12. Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Bożek, Piotr

    2018-03-01

    The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a doubly differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √s_NN = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v_2^2 and v_4, of v_2 v_3 and v_5, or of v_2^3, v_3^3, and v_6. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.

  13. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; and 3) predict the underlying curves. Through simulations, the proposed method is shown to discover dominating modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  14. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308

  15. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
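
    A hedged sketch of the idea behind MC-GPCA, PCA on a matrix of stacked per-node centrality features, using networkx and scikit-learn on a random graph; the feature set, scaling, and outlier rule are stand-ins, not the authors' exact construction.

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.decomposition import PCA

    G = nx.erdos_renyi_graph(200, 0.05, seed=0)   # stand-in for a connectivity graph

    # Stack several node-level features: centrality measures and walk statistics.
    feats = np.column_stack([
        list(nx.degree_centrality(G).values()),
        list(nx.closeness_centrality(G).values()),
        list(nx.betweenness_centrality(G).values()),
        list(nx.pagerank(G).values()),
    ])
    feats = (feats - feats.mean(0)) / feats.std(0)   # center and scale each feature

    scores = PCA(n_components=2).fit_transform(feats)
    # Nodes with extreme scores are candidates for anomalous connectivity patterns.
    outliers = np.argsort(np.linalg.norm(scores, axis=1))[-5:]
    print(outliers)
    ```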

  16. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, and apply it to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median time patients spent before being admitted as an inpatient, before being sent home, or before being seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis and show that the independent components are more suitable for understanding the data sets through visualizations.
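
    A toy illustration of the modeling assumption stated in the abstract, throughput measures as linear mixtures of latent sources, using scikit-learn's FastICA on invented data (the real study used five measures from 3,086 hospitals; the two "sources" below are hypothetical).

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA, PCA

    rng = np.random.default_rng(4)
    # Invented stand-in: 3086 hospitals x 5 throughput measures, built as
    # mixtures of two latent sources (e.g., staffing level, patient volume).
    S = rng.laplace(size=(3086, 2))
    A = rng.random((2, 5))
    X = S @ A + 0.1 * rng.standard_normal((3086, 5))

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)                  # estimated independent sources
    pcs = PCA(n_components=2).fit_transform(X)    # PCA baseline for comparison
    ```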

  17. Principal component analysis of Raman spectra for TiO2 nanoparticle characterization

    NASA Astrophysics Data System (ADS)

    Ilie, Alina Georgiana; Scarisoareanu, Monica; Morjan, Ion; Dutu, Elena; Badiceanu, Maria; Mihailescu, Ion

    2017-09-01

    The Raman spectra of anatase/rutile mixed-phase Sn-doped TiO2 nanoparticles and undoped TiO2 nanoparticles synthesised by laser pyrolysis, with nanocrystallite dimensions varying from 8 to 28 nm, were processed with self-written software that applies Principal Component Analysis (PCA) to the measured spectra to verify the possibility of objective auto-characterization of nanoparticles from their vibrational modes. The photo-excited process of Raman scattering is very sensitive to the material characteristics, especially in the case of nanomaterials, where more properties become relevant to the vibrational behaviour. We used PCA, a statistical procedure that performs eigenvalue decomposition of the descriptive data covariance, to automatically analyse each sample's measured Raman spectrum and to infer the correlation between nanoparticle dimensions, tin and carbon concentration, and their principal component (PC) values. This type of application allows an approximation of the crystallite size, or tin concentration, to be obtained only by measuring the Raman spectrum of the sample. The study of the loadings of the principal components provides information on how the vibrational modes are affected by the nanoparticle features and on the spectral areas relevant for the classification.

  18. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risk in the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.

  19. Spatial patterns of soil moisture connected to monthly-seasonal precipitation variability in a monsoon region

    Treesearch

    Yongqiang Liu

    2003-01-01

    The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a principal component analysis technique similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...

  20. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data for classifying which mental task is being performed. Examples are presented of the filtering of various artifacts, and classification results are shown for EEG from five mental tasks using committees of decision trees.

  21. Detection of decomposition volatile organic compounds in soil following removal of remains from a surface deposition site.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-09-01

    Cadaver-detection dogs use volatile organic compounds (VOCs) to search for human remains including those deposited on or beneath soil. Soil can act as a sink for VOCs, causing loading of decomposition VOCs in the soil following soft tissue decomposition. The objective of this study was to chemically profile decomposition VOCs from surface decomposition sites after remains were removed from their primary location. Pig carcasses were used as human analogues and were deposited on a soil surface to decompose for 3 months. The remains were then removed from each site and VOCs were collected from the soil for 7 months thereafter and analyzed by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS). Decomposition VOCs diminished within 6 weeks and hydrocarbons were the most persistent compound class. Decomposition VOCs could still be detected in the soil after 7 months using Principal Component Analysis. This study demonstrated that the decomposition VOC profile, while detectable by GC×GC-TOFMS in the soil, was considerably reduced and altered in composition upon removal of remains. Chemical reference data is provided by this study for future investigations of canine alert behavior in scenarios involving scattered or scavenged remains.

  22. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprising 3-11 MUPTs demonstrate that FDA and SPCA improve the decomposition accuracy by 6% on average. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
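
    A sketch of the FDA stage using scikit-learn's LinearDiscriminantAnalysis on invented motor unit potentials; the paper's certainty-based classifier is replaced here by a nearest-neighbour vote purely for illustration.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(5)
    # Toy MUP shapes: 3 motor units, 80-sample waveforms with jitter.
    templates = rng.standard_normal((3, 80))
    labels = rng.integers(0, 3, 300)           # provisional labels from a decomposer
    mups = templates[labels] + 0.5 * rng.standard_normal((300, 80))

    lda = LinearDiscriminantAnalysis(n_components=2).fit(mups, labels)
    Z = lda.transform(mups)      # space where same-MU potentials cluster tightly

    # Re-classify in the discriminative space (nearest-neighbour stand-in).
    relabelled = KNeighborsClassifier(5).fit(Z, labels).predict(Z)
    print((relabelled == labels).mean())
    ```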

  23. SaaS Platform for Time Series Data Handling

    NASA Astrophysics Data System (ADS)

    Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail

    2018-02-01

    The paper is devoted to the description of MathBrain, a cloud-based resource that follows the "Software as a Service" model. It is designed to maximize the efficiency of current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, principal component analysis and independent component analysis decompositions, quantitative analysis, and magnetoencephalography inverse problem solution in a single-dipole model based on multichannel spectral data.

  24. COMPADRE: an R and web resource for pathway activity analysis by component decompositions.

    PubMed

    Ramos-Rodriguez, Roberto-Rafael; Cuevas-Diaz-Duran, Raquel; Falciani, Francesco; Tamez-Peña, Jose-Gerardo; Trevino, Victor

    2012-10-15

    The analysis of biological networks has become essential to study functional genomic data. Compadre is a tool to estimate pathway/gene-set activity indexes using sub-matrix decompositions for biological network analyses. The Compadre pipeline also includes one of the direct uses of activity indexes: detecting altered gene sets. For this, the gene expression sub-matrix of a gene set is decomposed into components, which are used to test differences between groups of samples. This procedure is performed with and without differentially expressed genes to decrease false calls. During this process, Compadre also performs an over-representation test. Compadre already implements four decomposition methods [principal component analysis (PCA), Isomaps, independent component analysis (ICA) and non-negative matrix factorization (NMF)], six statistical tests (t- and f-test, SAM, Kruskal-Wallis, Welch and Brown-Forsythe), and several gene sets (KEGG, BioCarta, Reactome, GO and MsigDB), and can be easily expanded. Our simulation results shown in the Supplementary Information suggest that Compadre detects more pathways than over-representation tools like DAVID, Babelomics and Webgestalt, and fewer false positives than PLAGE. The output is composed of results from decomposition and over-representation analyses, providing a more complete biological picture. Examples provided in the Supplementary Information show the utility, versatility and simplicity of Compadre for analyses of biological networks. Compadre is freely available at http://bioinformatica.mty.itesm.mx:8080/compadre. The R package is also available at https://sourceforge.net/p/compadre.
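
    A minimal numpy/scipy sketch of the core idea, deriving a per-sample pathway activity index from a decomposition of the gene-set sub-matrix and testing it between groups. Compadre itself supports several decompositions (PCA, Isomaps, ICA, NMF) and six tests; this PCA-only toy with invented data is not its implementation.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(9)
    expr = rng.standard_normal((1000, 40))     # genes x samples (invented)
    groups = np.array([0] * 20 + [1] * 20)
    expr[:25, groups == 1] += 1.0              # an "altered" gene set: genes 0..24

    def pathway_activity(expr, gene_idx):
        """Activity index = first principal component of the gene-set sub-matrix."""
        sub = expr[gene_idx] - expr[gene_idx].mean(1, keepdims=True)
        _, _, Vt = np.linalg.svd(sub, full_matrices=False)
        return Vt[0]                           # one activity value per sample

    act = pathway_activity(expr, np.arange(25))
    print(ttest_ind(act[groups == 0], act[groups == 1]))
    ```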

  25. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    Li, Huamin; Linderman, George C.; Szlam, Arthur; Stanton, Kelly P.; Kluger, Yuval; Tygert, Mark

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
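
    The article's implementation targets MATLAB; for readers working in Python, scikit-learn exposes a comparable randomized SVD routine, shown here on a synthetic low-rank matrix.

    ```python
    import numpy as np
    from sklearn.utils.extmath import randomized_svd

    rng = np.random.default_rng(6)
    A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1000))  # rank 50

    U, s, Vt = randomized_svd(A, n_components=50, n_oversamples=10, random_state=0)
    err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
    print(f"relative reconstruction error: {err:.2e}")
    ```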

  26. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis (ICA), a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in rotational techniques.

  27. Respiratory motion correction in dynamic MRI using robust data decomposition registration - application to DCE-MRI.

    PubMed

    Hamy, Valentin; Dikaios, Nikolaos; Punwani, Shonit; Melbourne, Andrew; Latifoltojar, Arash; Makanyanga, Jesica; Chouhan, Manil; Helbren, Emma; Menys, Alex; Taylor, Stuart; Atkinson, David

    2014-02-01

    Motion correction in Dynamic Contrast Enhanced (DCE-) MRI is challenging because rapid intensity changes can compromise common (intensity-based) registration algorithms. In this study we introduce a novel registration technique based on robust principal component analysis (RPCA) to decompose a given time series into a low-rank and a sparse component. This allows robust separation of motion components that can be registered from intensity variations that are left unchanged. This Robust Data Decomposition Registration (RDDR) is demonstrated on both simulated data and a wide range of clinical data. Robustness to different types of motion and breathing choices during acquisition is demonstrated for a variety of imaged organs, including liver, small bowel and prostate. The analysis of clinically relevant regions of interest showed both a decrease in error (15-62% reduction following registration) in tissue time-intensity curves and improved areas under the curve (AUC60) at early enhancement. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
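
    RDDR's registration machinery is beyond a short example, but its building block, robust PCA splitting a matrix into low-rank plus sparse parts, can be sketched with a standard inexact augmented Lagrangian iteration (principal component pursuit). The parameter choices below are conventional defaults, not the paper's.

    ```python
    import numpy as np

    def rpca(D, lam=None, tol=1e-7, max_iter=500):
        """Decompose D into low-rank L plus sparse S by inexact ALM."""
        m, n = D.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
        mu = 1.25 / np.linalg.norm(D, 2)
        L = S = np.zeros_like(D)
        for _ in range(max_iter):
            # Singular-value thresholding for the low-rank update.
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1 / mu, 0)) @ Vt
            # Soft thresholding for the sparse update.
            R = D - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
            Y = Y + mu * (D - L - S)
            if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
                break
            mu *= 1.5
        return L, S
    ```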

  28. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of large spatio-temporal datasets. The original MEEMD uses ensemble empirical mode decomposition to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principle behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.

  29. Reducing variation in decomposition odour profiling using comprehensive two-dimensional gas chromatography.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-01-01

    Challenges in decomposition odour profiling have led to variation in the documented odour profile between research groups worldwide. Background subtraction and the use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry, using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field; standardisation would reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in future investigations of the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  30. Strategies for reducing large fMRI data sets for independent component analysis.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R

    2006-06-01

    In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
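
    CRLS-PCA itself is not available in common Python libraries; as a hedged stand-in, scikit-learn's IncrementalPCA likewise extracts a chosen number of PCs from chunked data without ever forming the covariance matrix, which is the property the paper exploits. Dimensions below are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    rng = np.random.default_rng(7)
    ipca = IncrementalPCA(n_components=20)

    # Stand-in for an fMRI run: 240 scans x 100,000 voxels, streamed in chunks
    # so neither the full data matrix nor its covariance is ever held in memory.
    for _ in range(6):                       # 6 chunks of 40 scans each
        chunk = rng.standard_normal((40, 100_000))
        ipca.partial_fit(chunk)

    scores = ipca.transform(rng.standard_normal((40, 100_000)))  # PC scores
    ```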

  31. [Relationships between decomposition rate of leaf litter and initial quality across the alpine timberline ecotone in Western Sichuan, China].

    PubMed

    Yang, Lin; Deng, Chang-chun; Chen, Ya-mei; He, Run-lian; Zhang, Jian; Liu, Yang

    2015-12-01

    The relationships between litter decomposition rate and the initial litter quality of 14 representative plants in the alpine forest ecotone of western Sichuan were investigated in this paper. The decomposition rate k of the litter ranged from 0.16 to 1.70. Woody leaf litter and moss litter decomposed much more slowly, shrub litter decomposed somewhat faster, and herbaceous litter decomposed fastest among all plant forms. There were significant linear regression relationships between the litter decomposition rate and the N content, lignin content, phenolics content, C/N, C/P and lignin/N. In a path analysis, lignin/N and hemicellulose content together explained 78.4% of the variation in the litter decomposition rate (k). Lignin/N alone explained 69.5% of the variation in k, and the direct path coefficient of lignin/N on k was -0.913. Principal component analysis (PCA) showed that the contribution rate of the first sort axis to k and the decomposition time (t) reached 99.2%. Significant positive correlations existed between lignin/N, lignin content, C/N, C/P and the first sort axis, with the closest relationship between lignin/N and the first sort axis (r = 0.923). Lignin/N was the key quality factor affecting plant litter decomposition rate across the alpine timberline ecotone: the higher the initial lignin/N, the lower the decomposition rate of leaf litter.
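
    A tiny illustration of the single-pool negative exponential decay model behind a decomposition rate k, fitted with scipy; the litter-bag numbers are invented, and only the model form is taken from the record.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Invented litter-bag data: fraction of initial mass vs. time (years).
    t = np.array([0.5, 1, 2, 4, 7, 10, 13])
    mass = np.array([0.95, 0.88, 0.75, 0.55, 0.35, 0.25, 0.18])

    model = lambda t, k: np.exp(-k * t)            # single-pool negative exponential
    (k,), _ = curve_fit(model, t, mass, p0=[0.2])
    residual = mass - model(t, k)                  # deviation from the fitted curve
    print(f"k = {k:.2f} per year")
    ```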

  32. Tailored multivariate analysis for modulated enhanced diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in abundances of different crystal phases, which occurred besides structural variations in the specific case considered. To develop a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.

  33. Spectral decomposition of asteroid Itokawa based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Koga, Sumire C.; Sugita, Seiji; Kamata, Shunichi; Ishiguro, Masateru; Hiroi, Takahiro; Tatsumi, Eri; Sasaki, Sho

    2018-01-01

    The heliocentric stratification of asteroid spectral types may hold important information on the early evolution of the Solar System. Asteroid spectral taxonomy is based largely on principal component analysis. However, how the surface properties of asteroids, such as composition and age, are projected in the principal-component (PC) space is not well understood. We decompose multi-band disk-resolved visible spectra of the Itokawa surface with principal component analysis (PCA) in comparison with main-belt asteroids. The obtained distribution of Itokawa spectra projected in the PC space of main-belt asteroids follows a linear trend linking the Q-type and S-type regions and is consistent with the results of space-weathering experiments on ordinary chondrites and olivine, suggesting that this trend may be a space-weathering-induced spectral evolution track for S-type asteroids. Comparison with space-weathering experiments also yields a short average surface age (less than a few million years) for Itokawa, consistent with the cosmic-ray-exposure time of samples returned from Itokawa. The Itokawa PC score distribution exhibits asymmetry along the evolution track, strongly suggesting that space weathering has already begun to saturate on this young asteroid. The freshest spectrum found on Itokawa exhibits a clear sign of space weathering, indicating again that space weathering occurs very rapidly on this body. We also conducted PCA on Itokawa spectra alone and compared the results with space-weathering experiments. The results indicate that the first principal component of Itokawa surface spectra is consistent with spectral change due to space weathering and that the spatial variation in the degree of space weathering is very large (a factor of three in surface age), which strongly suggests the presence of strong regional/local resurfacing process(es) on this small asteroid.

  34. Quality improvement of diagnosis of the electromyography data based on statistical characteristics of the measured signals

    NASA Astrophysics Data System (ADS)

    Selivanova, Karina G.; Avrunin, Oleg G.; Zlepko, Sergii M.; Romanyuk, Sergii O.; Zabolotna, Natalia I.; Kotyra, Andrzej; Komada, Paweł; Smailova, Saule

    2016-09-01

    Research and systematization of motor disorders, taking into account clinical and neurophysiologic phenomena, is an important and topical problem in neurology. The article describes a technique for decomposing surface electromyography (EMG) signals using Principal Component Analysis. The decomposition is achieved by a set of algorithms developed specifically for EMG analysis. The accuracy was verified by calculating the Mahalanobis distance and the probability of error.

  35. A Multi-Dimensional Functional Principal Components Analysis of EEG Data

    PubMed Central

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A.; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-01-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations. PMID:28072468

  36. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    NASA Astrophysics Data System (ADS)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, have some drawbacks that can be overcome by video systems applying markerless techniques. In this paper, a computer vision technique specifically designed for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.

  37. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    PubMed

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
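
    A bare-bones numpy sketch of the PCA stage of such a compressor: encode each extracted beat by a few principal component scores and measure PRDN. The beats are synthetic, and the pre-processing, BRC/EC quality-control loop and delta/Huffman coding described in the paper are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    # Toy beat matrix: 200 extracted beats x 300 samples (1 kHz, R-peak aligned).
    base = np.sin(np.linspace(0, 2 * np.pi, 300)) ** 3
    beats = base + 0.05 * rng.standard_normal((200, 300))

    mu = beats.mean(0)
    U, s, Vt = np.linalg.svd(beats - mu, full_matrices=False)

    m = 8                                       # principal components kept
    scores = U[:, :m] * s[:m]                   # what actually gets encoded
    recon = mu + scores @ Vt[:m]

    prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
    cr = beats.size / (scores.size + Vt[:m].size + mu.size)
    print(f"PRDN = {prdn:.2f}%, CR ~ {cr:.1f} (before entropy coding)")
    ```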

  38. SVD-aided pseudo principal-component analysis: A new method to speed up and improve determination of the optimum kinetic model from time-resolved data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oang, Key Young; Yang, Cheolhee; Muniyappan, Srinivasan

    Determination of the optimum kinetic model is an essential prerequisite for characterizing the dynamics and mechanism of a reaction. Here, we propose a simple method, termed singular value decomposition-aided pseudo principal-component analysis (SAPPA), to facilitate determination of the optimum kinetic model from time-resolved data by bypassing any need to examine candidate kinetic models. We demonstrate the wide applicability of SAPPA by examining three different sets of experimental time-resolved data and show that SAPPA can efficiently determine the optimum kinetic model. In addition, the results of SAPPA for both time-resolved X-ray solution scattering (TRXSS) and transient absorption (TA) data of the same protein reveal that global structural changes of the protein, which are probed by TRXSS, may occur more slowly than local structural changes around the chromophore, which are probed by TA spectroscopy.

  39. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The combination of principal component analysis (PCA) and region matching is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for a Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  40. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634

  3. Leaf Litter Mixtures Alter Microbial Community Development: Mechanisms for Non-Additive Effects in Litter Decomposition

    PubMed Central

    Chapman, Samantha K.; Newman, Gregory S.; Hart, Stephen C.; Schweitzer, Jennifer A.; Koch, George W.

    2013-01-01

    To what extent microbial community composition can explain variability in ecosystem processes remains an open question in ecology. Microbial decomposer communities can change during litter decomposition due to biotic interactions and shifting substrate availability. Though the relative abundance of decomposers may change when leaf litter is mixed, linking these shifts to the non-additive patterns often recorded in mixed-species litter decomposition rates has been elusive; establishing such a link would tie community composition to ecosystem function. We extracted phospholipid fatty acids (PLFAs) from single species and mixed species leaf litterbags after 10 and 27 months of decomposition in a mixed conifer forest. Total PLFA concentrations were 70% higher on litter mixtures than single litter types after 10 months, but were only 20% higher after 27 months. Similarly, fungal-to-bacterial ratios differed between mixed and single litter types after 10 months of decomposition, but equalized over time. Microbial community composition, as indicated by principal components analyses, differed due to both litter mixing and stage of litter decomposition. These shifts were driven in particular by the PLFA biomarkers a15:0 and cy17:0, which indicate gram-positive and gram-negative bacteria, respectively. Total PLFA correlated significantly with single litter mass loss early in decomposition but not at later stages. We conclude that litter mixing alters microbial community development, which can contribute to synergisms in litter decomposition. These findings advance our understanding of how changing forest biodiversity can alter microbial communities and the ecosystem processes they mediate. PMID:23658639

  4. Tailored multivariate analysis for modulated enhanced diffraction

    DOE PAGES

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni; ...

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, however, OCCR was able to supply only the latter information, as the former was hindered by changes in the abundances of different crystal phases, which occurred alongside structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.
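
    A toy illustration of the rotation idea only (not the published OCCR algorithm): starting from the PCA solution, rotate a pair of components to maximize a user-defined figure of merit. Here the figure of merit is simply non-negativity of the loadings, chosen for brevity; the data and sizes are hypothetical.

```python
import numpy as np

def rotate_pair(loadings, fom, n_angles=360):
    """Grid-search a planar rotation of two components to maximize fom."""
    best_val, best_L = -np.inf, loadings
    for a in np.linspace(0.0, np.pi, n_angles):
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        L = loadings @ R
        val = fom(L)
        if val > best_val:
            best_val, best_L = val, L
    return best_L

rng = np.random.default_rng(2)
X = rng.random((100, 30))                        # stand-in MED data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_loadings = Vt[:2].T                          # two leading PCA directions
rotated = rotate_pair(pca_loadings, fom=lambda L: np.minimum(L, 0).sum() * -1)
```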

  5. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    PubMed

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in cases of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustic signals even at negative signal-to-noise ratios. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
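
    The low-rank-plus-sparse split at the heart of RPCA can be sketched with a GoDec-style alternating scheme (a deliberate simplification on toy matrices; the paper and production RPCA codes typically use principled solvers such as inexact augmented Lagrangian methods):

```python
import numpy as np

def lowrank_plus_sparse(M, rank=4, lam=3.0, n_iter=50):
    """Alternate a best rank-r fit with keeping only large residual entries."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank part (acoustics)
        R = M - L
        S = np.where(np.abs(R) > lam, R, 0.0)      # sparse part (BLN stand-in)
    return L, S

rng = np.random.default_rng(3)
acoustic = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))
noise = 10.0 * (rng.random((50, 50)) < 0.05) * rng.standard_normal((50, 50))
L, S = lowrank_plus_sparse(acoustic + noise)
print("low-rank recovery error:",
      np.linalg.norm(L - acoustic) / np.linalg.norm(acoustic))
```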

  6. Steerable Principal Components for Space-Frequency Localized Images

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWFs expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and, more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879
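
    A brute-force illustration of what steerable PCA computes, on stand-in images (the paper's PSWF method achieves the same result without ever materializing the rotated copies): augment the stack with in-plane rotations, then take ordinary principal components.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(4)
images = rng.random((20, 32, 32))                 # stand-in image stack
angles = np.arange(0, 360, 30)

augmented = np.stack([
    rotate(img, angle, reshape=False, order=1)    # sampled planar rotations
    for img in images for angle in angles
])
X = augmented.reshape(len(augmented), -1)
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigenimages = Vt[:10].reshape(10, 32, 32)         # steerable principal components
# The cost above grows with the number of sampled rotations; expanding in a
# steerable basis (PSWFs) instead reduces the problem to eigendecompositions
# of a block-diagonal matrix, one block per angular frequency.
```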

  7. Effects of Increased Summer Precipitation and Nitrogen Addition on Root Decomposition in a Temperate Desert

    PubMed Central

    Zhao, Hongmei; Huang, Gang; Li, Yan; Ma, Jian; Sheng, Jiandong; Jia, Hongtao; Li, Congjuan

    2015-01-01

    Background Climate change scenarios that include precipitation shifts and nitrogen (N) deposition are impacting carbon (C) budgets in arid ecosystems. Roots constitute an important part of the C cycle, but it is still unclear which factors control root mass loss and nutrient release in arid lands. Methodology/Principal Findings Litterbags were used to investigate the decomposition rate and nutrient dynamics in root litter with water and N-addition treatments in the Gurbantunggut Desert in China. Water and N addition had no significant effect on root mass loss and the N and phosphorus content of litter residue. The loss of root litter and nutrient releases were strongly controlled by the initial lignin content and the lignin:N ratio, as evidenced by the negative correlations between decomposition rate and litter lignin content and the lignin:N ratio. Fine roots of Seriphidium santolinum (with higher initial lignin content) had a slower decomposition rate in comparison to coarse roots. Conclusion/Significance Results from this study indicate that small and temporary changes in rainfall and N deposition do not affect root decomposition patterns in the Gurbantunggut Desert. Root decomposition rates were significantly different between species, and also between fine and coarse roots, and were determined by carbon components, especially lignin content, suggesting that root litter quality may be the primary driver of belowground carbon turnover. PMID:26544050

  8. SU-F-J-138: An Extension of PCA-Based Respiratory Deformation Modeling Via Multi-Linear Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries, hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. Owing to the multi-linear format of the extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
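
    A minimal sketch of the multi-linear idea via mode-n unfoldings (Tucker/HOSVD-style; shapes are hypothetical and this is not the authors' MLD code): each tensor mode contributes its own factor matrix, instead of one bilinear space-time split.

```python
import numpy as np

def mode_unfold(T, mode):
    """Matricize tensor T along one mode (axis)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(5)
# Hypothetical DVF magnitudes on a 16 x 16 x 8 grid over 10 respiratory phases.
dvf = rng.standard_normal((16, 16, 8, 10))

for mode, name in enumerate(["x", "y", "z", "phase"]):
    U, s, _ = np.linalg.svd(mode_unfold(dvf, mode), full_matrices=False)
    print(f"mode {name}: {s.size} singular values available")
# A PCA of the flattened (space x phase) matrix offers at most 10 components,
# the number of phases; here every mode has its own factor matrix, which is
# what lets a multi-linear model encode higher-DoF deformation structure.
```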

  9. An algorithm for separation of mixed sparse and Gaussian sources

    PubMed Central

    Akkalkotkar, Ameya

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition. PMID:28414814
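
    A minimal sketch in the spirit of the reproducibility ranking (toy two-source mixture; not the authors' MIPReSt code): run ICA on random subsamples and score each component of a reference run by its best absolute correlation across runs. Nongaussian components reproduce; Gaussian ones do not.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
t = np.linspace(0, 8, 2000)
sources = np.stack([np.sign(np.sin(3 * t)),          # nongaussian square wave
                    rng.standard_normal(t.size)])     # Gaussian source
X = (np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources).T  # mixed observations

ref = FastICA(n_components=2, random_state=0).fit_transform(X)
scores, n_runs = np.zeros(2), 20
for run in range(n_runs):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)  # subsample
    comp = FastICA(n_components=2, random_state=run).fit_transform(X[idx])
    C = np.abs(np.corrcoef(ref[idx].T, comp.T)[:2, 2:])  # ref vs rerun
    scores += C.max(axis=1) / n_runs
print("reproducibility per reference component:", scores.round(2))
```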

  10. An algorithm for separation of mixed sparse and Gaussian sources.

    PubMed

    Akkalkotkar, Ameya; Brown, Kevin Scott

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimation technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as a mixture of unknown composition.

  11. Fluorescence Intrinsic Characterization of Excitation-Emission Matrix Using Multi-Dimensional Ensemble Empirical Mode Decomposition

    PubMed Central

    Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien

    2013-01-01

    Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806

  12. Tensor decomposition-based and principal-component-analysis-based unsupervised feature extraction applied to the gene expression and methylation profiles in the brains of social insects with multiple castes.

    PubMed

    Taguchi, Y-H

    2018-05-08

    Even though coexistence of multiple phenotypes sharing the same genomic background is interesting, it remains incompletely understood. Epigenomic profiles may represent key factors, with unknown contributions to the development of multiple phenotypes, and social-insect castes are a good model for elucidation of the underlying mechanisms. Nonetheless, previous studies have failed to identify genes associated with aberrant gene expression and methylation profiles because of the lack of suitable methodology that can address this problem properly. A recently proposed principal component analysis (PCA)-based and tensor decomposition (TD)-based unsupervised feature extraction (FE) can solve this problem because these two approaches can deal with gene expression and methylation profiles even when a small number of samples is available. PCA-based and TD-based unsupervised FE methods were applied to the analysis of gene expression and methylation profiles in the brains of two social insects, Polistes canadensis and Dinoponera quadriceps. Genes associated with differential expression and methylation between castes were identified, and analysis of enrichment of Gene Ontology terms confirmed reliability of the obtained sets of genes from the biological standpoint. Biologically relevant genes, shown to be associated with significant differential gene expression and methylation between castes, were identified here for the first time. The identification of these genes may help understand the mechanisms underlying epigenetic control of development of multiple phenotypes under the same genomic conditions.

  13. Fine structure of the low-frequency spectra of heart rate and blood pressure

    PubMed Central

    Kuusela, Tom A; Kaila, Timo J; Kähönen, Mika

    2003-01-01

    Background The aim of this study was to explore the principal frequency components of the heart rate and blood pressure variability in the low frequency (LF) and very low frequency (VLF) band. The spectral composition of the R–R interval (RRI) and systolic arterial blood pressure (SAP) in the frequency range below 0.15 Hz were carefully analyzed using three different spectral methods: Fast Fourier transform (FFT), Wigner-Ville distribution (WVD), and autoregression (AR). All spectral methods were used to create time–frequency plots to uncover the principal spectral components that are least dependent on time. The accurate frequencies of these components were calculated from the pole decomposition of the AR spectral density after determining the optimal model order – the most crucial factor when using this method – with the help of FFT and WVD methods. Results Spectral analysis of the RRI and SAP of 12 healthy subjects revealed that there are always at least three spectral components below 0.15 Hz. The three principal frequency components are 0.026 ± 0.003 (mean ± SD) Hz, 0.076 ± 0.012 Hz, and 0.117 ± 0.016 Hz. These principal components vary only slightly over time. FFT-based coherence and phase-function analysis suggests that the second and third components are related to the baroreflex control of blood pressure, since the phase difference between SAP and RRI was negative and almost constant, whereas the origin of the first component is different since no clear SAP–RRI phase relationship was found. Conclusion The above data indicate that spontaneous fluctuations in heart rate and blood pressure within the standard low-frequency range of 0.04–0.15 Hz typically occur at two frequency components rather than only at one as widely believed, and these components are not harmonically related. This new observation in humans can help explain divergent results in the literature concerning spontaneous low-frequency oscillations. It also raises methodological and computational questions regarding the usability and validity of the low-frequency spectral band when estimating sympathetic activity and baroreflex gain. PMID:14552660
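
    A minimal sketch of the pole-decomposition step (synthetic RRI-like series with the three reported frequencies; the crucial model-order choice discussed above is simply fixed by hand here): fit AR coefficients by Yule-Walker and read component frequencies off the pole angles.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_pole_frequencies(x, order, fs):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size  # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])              # Yule-Walker
    poles = np.roots(np.concatenate(([1.0], -a)))              # AR poles
    freqs = np.angle(poles) * fs / (2 * np.pi)
    return np.sort(freqs[freqs > 0])

fs = 4.0                                  # Hz, a typical resampled RRI rate
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(7)
rri = (np.sin(2 * np.pi * 0.026 * t) + np.sin(2 * np.pi * 0.076 * t)
       + np.sin(2 * np.pi * 0.117 * t) + 0.3 * rng.standard_normal(t.size))
print(ar_pole_frequencies(rri, order=12, fs=fs))  # peaks near 0.026/0.076/0.117 Hz
```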

  14. Fine structure of the low-frequency spectra of heart rate and blood pressure.

    PubMed

    Kuusela, Tom A; Kaila, Timo J; Kähönen, Mika

    2003-10-13

    The aim of this study was to explore the principal frequency components of the heart rate and blood pressure variability in the low frequency (LF) and very low frequency (VLF) band. The spectral composition of the R-R interval (RRI) and systolic arterial blood pressure (SAP) in the frequency range below 0.15 Hz were carefully analyzed using three different spectral methods: Fast Fourier transform (FFT), Wigner-Ville distribution (WVD), and autoregression (AR). All spectral methods were used to create time-frequency plots to uncover the principal spectral components that are least dependent on time. The accurate frequencies of these components were calculated from the pole decomposition of the AR spectral density after determining the optimal model order--the most crucial factor when using this method--with the help of FFT and WVD methods. Spectral analysis of the RRI and SAP of 12 healthy subjects revealed that there are always at least three spectral components below 0.15 Hz. The three principal frequency components are 0.026 +/- 0.003 (mean +/- SD) Hz, 0.076 +/- 0.012 Hz, and 0.117 +/- 0.016 Hz. These principal components vary only slightly over time. FFT-based coherence and phase-function analysis suggests that the second and third components are related to the baroreflex control of blood pressure, since the phase difference between SAP and RRI was negative and almost constant, whereas the origin of the first component is different since no clear SAP-RRI phase relationship was found. The above data indicate that spontaneous fluctuations in heart rate and blood pressure within the standard low-frequency range of 0.04-0.15 Hz typically occur at two frequency components rather than only at one as widely believed, and these components are not harmonically related. This new observation in humans can help explain divergent results in the literature concerning spontaneous low-frequency oscillations. It also raises methodological and computational questions regarding the usability and validity of the low-frequency spectral band when estimating sympathetic activity and baroreflex gain.

  15. Latent feature decompositions for integrative analysis of multi-platform genomic data

    PubMed Central

    Gregory, Karl B.; Momin, Amin A.; Coombes, Kevin R.; Baladandayuthapani, Veerabhadran

    2015-01-01

    Increased availability of multi-platform genomics data on matched samples has sparked research efforts to discover how diverse molecular features interact both within and between platforms. In addition, simultaneous measurements of genetic and epigenetic characteristics illuminate the roles their complex relationships play in disease progression and outcomes. However, integrative methods for diverse genomics data are faced with the challenges of ultra-high dimensionality and the existence of complex interactions both within and between platforms. We propose a novel modeling framework for integrative analysis based on decompositions of the large number of platform-specific features into a smaller number of latent features. Subsequently, we build a predictive model for clinical outcomes accounting for both within- and between-platform interactions based on Bayesian model averaging procedures. Principal components, partial least squares and non-negative matrix factorization as well as sparse counterparts of each are used to define the latent features, and the performance of these decompositions is compared both on real and simulated data. The latent feature interactions are shown to preserve interactions between the original features and not only aid prediction but also allow explicit selection of outcome-related features. The methods are motivated by, and applied to, a glioblastoma multiforme dataset from The Cancer Genome Atlas to predict patient survival times integrating gene expression, microRNA, copy number and methylation data. For the glioblastoma data, we find a high concordance between our selected prognostic genes and genes with known associations with glioblastoma. In addition, our model discovers several relevant cross-platform interactions such as copy number variation associated gene dosing and epigenetic regulation through promoter methylation. On simulated data, we show that our proposed method successfully incorporates interactions within and between genomic platforms to aid accurate prediction and variable selection. Our methods perform best when principal components are used to define the latent features. PMID:26146492

  16. Methods to assess an exercise intervention trial based on 3-level functional data.

    PubMed

    Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J

    2015-10-01

    Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
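
    A minimal sketch of the matching step only, on stand-in data (FastICA substitutes for infomax ICA, and scipy's assignment solver plays the role of the Hungarian sorting): pair components from two decompositions by maximizing total absolute spatial correlation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 64))  # fMRI stand-in

maps_a = FastICA(n_components=8, random_state=0).fit(X).components_
maps_b = FastICA(n_components=8, random_state=1).fit(X).components_

corr = np.abs(np.corrcoef(maps_a, maps_b)[:8, 8:])   # 8 x 8 cross-correlations
rows, cols = linear_sum_assignment(-corr)            # maximize total correlation
for i, j in zip(rows, cols):
    print(f"component {i} <-> {j}: |r| = {corr[i, j]:.2f}")
# High |r| across runs marks a reproducible component; low values flag the
# unstable ones, as in the equivalence-class analysis above.
```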

  18. Data-driven Analysis and Prediction of Arctic Sea Ice

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.; Ghil, M.; Yuan, X.; Ting, M.

    2015-12-01

    We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and leads to prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of leading principal components from the multivariate Empirical Orthogonal Function decomposition of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective forecasts with "no-look ahead" for up to 6 months ahead. It will be shown in particular that the memory effects included in our non-Markovian linear MSM models improve predictions of large-amplitude SIC anomalies in certain Arctic regions. Further improvements allowed by the MSM framework will adopt a nonlinear formulation, as well as alternative data-adaptive decompositions.
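
    A minimal EOF sketch under assumed shapes (random stand-in for the SIC field; not the authors' pipeline): remove the monthly climatology, unroll the anomaly maps to a time-by-space matrix, and take the SVD. The leading principal components are the kind of time series an MSM-style model would be trained on.

```python
import numpy as np

rng = np.random.default_rng(9)
n_years, n_grid = 35, 500
field = rng.standard_normal((n_years * 12, n_grid))   # stand-in SIC maps

clim = field.reshape(n_years, 12, n_grid).mean(axis=0)  # monthly climatology
anom = field - np.tile(clim, (n_years, 1))              # seasonal cycle removed

U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :5] * s[:5]          # leading principal components (model inputs)
eofs = Vt[:5]                   # corresponding spatial patterns
explained = (s[:5] ** 2) / (s ** 2).sum()
print("variance fraction per EOF:", explained.round(3))
```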

  19. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting the kernel eigenvectors by entropy instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it builds on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
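
    A minimal sketch of the KECA ranking that OKECA builds on (toy data, fixed length-scale; the OKECA rotation itself is not shown): eigendecompose an RBF kernel matrix and sort the eigenpairs by their contribution to the Renyi entropy estimate rather than by eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(10)
X = rng.standard_normal((80, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 1.0 ** 2))            # Gaussian kernel, length-scale 1

vals, vecs = np.linalg.eigh(K)              # ascending eigenvalues
ones = np.ones(len(X))
entropy_contrib = vals * (vecs.T @ ones) ** 2   # (sqrt(l_i) * 1^T e_i)^2 terms
order = np.argsort(entropy_contrib)[::-1]       # KECA order, not variance order
print("rank by entropy :", order[:3])
print("rank by variance:", np.argsort(vals)[::-1][:3])
# The two orders differ whenever a high-variance eigenvector sums to ~0 and
# therefore carries little of the entropy estimate.
```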

  20. Current Source Mapping by Spontaneous MEG and ECoG in Piglets Model

    PubMed Central

    Gao, Lin; Wang, Jue; Stephen, Julia; Zhang, Tongsheng

    2016-01-01

    Previous research reveals relatively strong spatial correlations of spontaneous activity over the cortex in electroencephalography (EEG) and magnetoencephalography (MEG) measurements. A critical obstacle in MEG current source mapping is that strong background activity masks the relatively weak local information. In this paper, the hypothesis is that the dominant components of this background activity can be captured by the first principal component (PC) of a principal component analysis (PCA), so that discarding the first PC before back projection enhances the exposure of the information carried by the subset of sensors reflecting local neuronal activity. By densely recording MEG signals (one measurement per 2×2 mm²) with a μSQUID in three piglet neocortical models over an 18×26 mm² area containing a lesion of a specific shape, this idea was demonstrated: strong activity could be imaged in the lesion region after removing the first PC in the delta, theta, and alpha bands, whereas the original recordings did not show such activity clearly. Thus, PCA decomposition can be employed to expose the local activity around the lesion in the piglet neocortical models by removing the dominant components of the background activity. PMID:27570537
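
    The projection step is simple enough to sketch directly (synthetic channels and sizes are hypothetical): estimate PCs across channels, zero out the first (dominant background) component, and back-project.

```python
import numpy as np

rng = np.random.default_rng(11)
n_samples, n_channels = 2000, 117
background = np.outer(np.sin(np.linspace(0, 40, n_samples)),
                      rng.random(n_channels))            # strong shared activity
local = np.zeros((n_samples, n_channels))
local[:, 50:55] = 0.1 * rng.standard_normal((n_samples, 5))  # weak lesion signal
meg = background + local

Xc = meg - meg.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
s_clean = s.copy()
s_clean[0] = 0.0                                 # discard the first PC
cleaned = (U * s_clean) @ Vt                     # back projection
print(np.abs(cleaned[:, 50:55]).mean())          # local signal now stands out
print(np.abs(cleaned[:, :5]).mean())             # background channels near zero
```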

  1. Rotation of EOFs by the Independent Component Analysis: Towards A Solution of the Mixing Problem in the Decomposition of Geophysical Time Series

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2001-01-01

    The Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. This rotation uses no localization criterion like other Rotation Techniques (RT); only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution appears able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.

  2. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772

  3. Co-pyrolysis characteristics and kinetic analysis of organic food waste and plastic.

    PubMed

    Tang, Yijing; Huang, Qunxing; Sun, Kai; Chi, Yong; Yan, Jianhua

    2018-02-01

    In this work, typical organic food waste (soybean protein, SP) and typical chlorine-enriched plastic waste (polyvinyl chloride, PVC) were chosen as principal municipal solid waste (MSW) components, and their interaction during co-pyrolysis was investigated. Results indicate that the interaction accelerated the reaction during co-pyrolysis. The activation energies needed for the decomposition of the mixture were 2-13% lower than the linear calculation, while the maximum reaction rates were 12-16% higher than calculated. In the fixed-bed experiments, the interaction was observed to reduce the yield of tar by 2-69% and to promote the yield of char by 13-39% compared with the linear calculation. In addition, 2-6 times more heavy components and 61-93% fewer nitrogen-containing components were formed in tar derived from the mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Feature extraction across individual time series observations with spikes using wavelet principal component analysis.

    PubMed

    Røislien, Jo; Winje, Brita

    2013-09-20

    Clinical studies frequently include repeated measurements of individuals, often for long periods. We present a methodology for extracting common temporal features across a set of individual time series observations. In particular, the methodology explores extreme observations within the time series, such as spikes, as a possible common temporal phenomenon. Wavelet basis functions are attractive in this sense, as they are localized in both time and frequency domains simultaneously, allowing for localized feature extraction from a time-varying signal. We apply wavelet basis function decomposition of individual time series, with corresponding wavelet shrinkage to remove noise. We then extract common temporal features using linear principal component analysis on the wavelet coefficients, before inverse transformation back to the time domain for clinical interpretation. We demonstrate the methodology on a subset of a large fetal activity study aiming to identify temporal patterns in fetal movement (FM) count data in order to explore formal FM counting as a screening tool for identifying fetal compromise and thus preventing adverse birth outcomes. Copyright © 2013 John Wiley & Sons, Ltd.
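
    A minimal sketch of the wavelet-PCA pipeline with the PyWavelets package (a common spike embedded in noisy series; thresholds and wavelet choice are illustrative, not the authors' settings): decompose each series, apply soft shrinkage, run PCA across subjects' coefficient vectors, and invert the leading loading back to the time domain.

```python
import numpy as np
import pywt

rng = np.random.default_rng(12)
n_subjects, n_times = 30, 256
spike = np.zeros(n_times)
spike[100:104] = 5.0                              # shared temporal spike
series = spike + 0.5 * rng.standard_normal((n_subjects, n_times))

def shrunk_coeffs(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold from the finest-scale detail coefficients.
    thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(x.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return np.concatenate(coeffs)

W = np.stack([shrunk_coeffs(x) for x in series])
U, s, Vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)

# Map the first loading vector back to the time domain for interpretation.
_, slices = pywt.coeffs_to_array(pywt.wavedec(series[0], "db4", level=4))
pc1_time = pywt.waverec(
    pywt.array_to_coeffs(Vt[0], slices, output_format="wavedec"), "db4")
print(np.argmax(np.abs(pc1_time)))               # near the spike location
```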

  5. Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1997-07-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1 (q_i + q_j) + C_2 q_i q_j, with constant coefficients C_0, C_1, and C_2, and 20 values q_i associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
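
    The claimed structure is easy to verify numerically: any matrix of the form above is spanned by the vectors 1 and q, hence has rank at most 2. A minimal sketch with a synthetic stand-in for the Miyazawa-Jernigan matrix (the q values here are random, not the published hydrophobicities):

```python
import numpy as np

rng = np.random.default_rng(13)
q = rng.random(20)                                # one value per amino acid type
C0, C1, C2 = -1.0, -0.5, 0.3
M = C0 + C1 * (q[:, None] + q[None, :]) + C2 * np.outer(q, q)

vals, vecs = np.linalg.eigh(M)
idx = np.argsort(np.abs(vals))[::-1][:2]          # two dominant eigenpairs
M2 = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T  # rank-2 reconstruction
print("max reconstruction error:", np.abs(M2 - M).max())   # ~1e-15: M is rank 2
```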

  6. Insight into the novel inhibition mechanism of apigenin to Pneumolysin by molecular modeling

    NASA Astrophysics Data System (ADS)

    Niu, Xiaodi; Yang, Yanan; Song, Meng; Wang, Guizhen; Sun, Lin; Gao, Yawen; Wang, Hongsu

    2017-11-01

    In this study, the mechanism of apigenin inhibition was explored using molecular modeling, binding energy calculations, and mutagenesis assays. Energy decomposition analysis indicated that apigenin binds in the gap between domains 3 and 4 of pneumolysin (PLY). Using principal component analysis, we found that binding of apigenin to PLY weakens the motion of domains 3 and 4. Consequently, these domains cannot complete the transition from monomer to oligomer, thereby blocking oligomerisation of PLY and counteracting its haemolytic activity. This inhibitory mechanism was confirmed by haemolysis assays, and these findings will promote the future development of an antimicrobial agent.

  7. Fault detection, isolation, and diagnosis of self-validating multifunctional sensors.

    PubMed

    Yang, Jing-Li; Chen, Yin-Sheng; Zhang, Li-Li; Sun, Zhen

    2016-06-01

    A novel fault detection, isolation, and diagnosis (FDID) strategy for self-validating multifunctional sensors is presented in this paper. The sparse non-negative matrix factorization-based method can effectively detect faults by using the squared prediction error (SPE) statistic, and variable contribution plots based on the SPE statistic help to locate and isolate the faulty sensitive units. The complete ensemble empirical mode decomposition is employed to decompose the fault signals into a series of intrinsic mode functions (IMFs) and a residual. The sample entropy (SampEn)-weighted energy values of each IMF and the residual are estimated to represent the characteristics of the fault signals. A multi-class support vector machine is introduced to identify the fault mode in order to diagnose the status of the faulty sensitive units. The performance of the proposed strategy is compared with other fault detection strategies, such as principal component analysis and independent component analysis, and with fault diagnosis strategies such as empirical mode decomposition coupled with support vector machine. The proposed strategy is fully evaluated in a real self-validating multifunctional sensor experimental system, and the experimental results demonstrate that it provides an excellent solution to the FDID research topic for self-validating multifunctional sensors.
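
    A minimal sketch of SPE-based detection and contribution plots on a plain PCA model (PCA is swapped in for the paper's sparse NMF, and the data, limit, and fault are synthetic): SPE is the squared residual outside the retained subspace, and the per-variable residuals localize the faulty unit.

```python
import numpy as np

rng = np.random.default_rng(14)
train = rng.standard_normal((500, 6)) @ rng.standard_normal((6, 6))  # healthy data

mu = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mu, full_matrices=False)
P = Vt[:3].T                                      # retained principal subspace

def spe(x):
    r = (x - mu) - P @ (P.T @ (x - mu))           # residual of one sample
    return r @ r, r ** 2                          # statistic and contributions

limit = np.percentile([spe(x)[0] for x in train], 99)  # empirical control limit
faulty = train[0].copy()
faulty[4] += 8.0                                  # bias fault on sensor 4
stat, contrib = spe(faulty)
print(stat > limit)                               # fault detected
print(contrib.argmax())                           # largest contribution flags sensor 4
```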

  8. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.

  9. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  10. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  11. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to extract the feature vector. Since the vector is generally of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
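
    A minimal sketch of the pipeline's flow with scikit-learn stand-ins and toy signals (class definitions, segment sizes, and atom counts are illustrative, not the authors' settings): learn a dictionary per signal, take its singular value sequence as the feature vector, reduce with PCA, and classify with KNN.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(15)

def feature(signal, n_atoms=8, seg=32):
    segments = signal[:len(signal) // seg * seg].reshape(-1, seg)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    D = dico.fit(segments).components_            # learned dictionary atoms
    return np.linalg.svd(D, compute_uv=False)     # singular value sequence

t = np.linspace(0, 1, 1024)
signals, labels = [], []
for k, f0 in enumerate([13.0, 37.0]):             # two synthetic "fault" classes
    for _ in range(10):
        signals.append(np.sin(2 * np.pi * f0 * t)
                       + 0.3 * rng.standard_normal(t.size))
        labels.append(k)

F = np.stack([feature(x) for x in signals])
F2 = PCA(n_components=2).fit_transform(F)         # dimensionality reduction
clf = KNeighborsClassifier(n_neighbors=3).fit(F2[:-2], labels[:-2])
print(clf.predict(F2[-2:]), labels[-2:])          # held-out predictions
```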

  12. Development of a ReaxFF reactive force field for ammonium nitrate and application to shock compression and thermal decomposition.

    PubMed

    Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P

    2014-02-27

    We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.

  13. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions have been proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, has received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
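
    A minimal sketch of the two-stage idea (one reading of the scheme on synthetic data, not the authors' exact algorithm): QR-factor the small class-centroid matrix for a cheap first-stage projection, then solve an ordinary, now nonsingular, LDA problem inside that subspace.

```python
import numpy as np

rng = np.random.default_rng(16)
d, k, n_per = 1000, 3, 20                         # high dimension, 3 classes
X = [rng.standard_normal((n_per, d)) + 5 * rng.standard_normal(d)
     for _ in range(k)]                           # n_per << d: scatter singular

C = np.stack([x.mean(axis=0) for x in X]).T       # d x k centroid matrix
Q, _ = np.linalg.qr(C)                            # stage 1: QR, cost O(d k^2)

# Stage 2: classical LDA on the k-dimensional projected data.
Y = [x @ Q for x in X]
mu = np.concatenate(Y).mean(axis=0)
Sw = sum((y - y.mean(0)).T @ (y - y.mean(0)) for y in Y)            # within-class
Sb = sum(n_per * np.outer(y.mean(0) - mu, y.mean(0) - mu) for y in Y)  # between
vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(np.real(vals))[::-1][:k - 1]
G = Q @ np.real(vecs[:, order])                   # final d x (k-1) transform
print(G.shape)
```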

  14. Enhanced decomposition of stable soil organic carbon and microbial catabolic potentials by long-term field warming

    DOE PAGES

    Feng, Wenting; Liang, Junyi; Hale, Lauren E.; ...

    2017-06-09

    Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon–climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed to 95% of total cumulative CO 2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition was changing. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change.

  15. Enhanced decomposition of stable soil organic carbon and microbial catabolic potentials by long-term field warming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Wenting; Liang, Junyi; Hale, Lauren E.

    Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon–climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed to 95% of total cumulative CO 2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition was changing. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change.

  16. Enhanced decomposition of stable soil organic carbon and microbial catabolic potentials by long-term field warming.

    PubMed

    Feng, Wenting; Liang, Junyi; Hale, Lauren E; Jung, Chang Gyo; Chen, Ji; Zhou, Jizhong; Xu, Minggang; Yuan, Mengting; Wu, Liyou; Bracho, Rosvel; Pegoraro, Elaine; Schuur, Edward A G; Luo, Yiqi

    2017-11-01

    Quantifying soil organic carbon (SOC) decomposition under warming is critical to predict carbon-climate feedbacks. According to the substrate regulating principle, SOC decomposition would decrease as labile SOC declines under field warming, but observations of SOC decomposition under warming do not always support this prediction. This discrepancy could result from varying changes in SOC components and soil microbial communities under warming. This study aimed to determine the decomposition of SOC components with different turnover times after being subjected to long-term field warming and/or root exclusion to limit C input, and to test whether SOC decomposition is driven by substrate lability under warming. Taking advantage of a 12-year field warming experiment in a prairie, we assessed the decomposition of SOC components by incubating soils from control and warmed plots, with and without root exclusion for 3 years. We assayed SOC decomposition from these incubations by combining inverse modeling and microbial functional genes during decomposition with a metagenomic technique (GeoChip). The decomposition of SOC components with turnover times of years and decades, which contributed to 95% of total cumulative CO 2 respiration, was greater in soils from warmed plots. But the decomposition of labile SOC was similar in warmed plots compared to the control. The diversity of C-degradation microbial genes generally declined with time during the incubation in all treatments, suggesting shifts of microbial functional groups as substrate composition was changing. Compared to the control, soils from warmed plots showed a significant increase in the signal intensities of microbial genes involved in degrading complex organic compounds, implying enhanced potential abilities of microbial catabolism. These are likely responsible for accelerated decomposition of SOC components with slow turnover rates. Overall, the shifted microbial community induced by long-term warming accelerates the decomposition of SOC components with slow turnover rates and thus amplifies the positive feedback to climate change. © 2017 John Wiley & Sons Ltd.

  17. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S⁴G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S⁴G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S⁴G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparation of input data for the decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.

  18. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities.

    PubMed

    Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
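
    A minimal sketch of the kind of three-way decomposition described above, assuming the Python `tensorly` package; the abundance tensor, its dimensions, and the rank are illustrative stand-ins for the survey data, not the authors' actual pipeline.

```python
# Hypothetical CP/PARAFAC sketch; `tensorly` is assumed to be installed and
# the year x site x species tensor is synthetic, standing in for survey data.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
X = tl.tensor(rng.random((20, 30, 65)))      # years x sites x species

# Rank-3 CP model: X ~ sum_r w_r * a_r (outer) b_r (outer) c_r
weights, factors = parafac(X, rank=3, normalize_factors=True)
years, sites, species = factors              # temporal, spatial, species loadings

# Species with similar loading profiles form candidate sub-communities
print(species.shape)                         # (65, 3)
```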

  19. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace identification and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different signal processing and system identification algorithms are proposed: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
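
    As a rough illustration of one of the SVD-based methods named above, the following sketch implements basic singular spectrum analysis (SSA) with numpy; the window length, test signal, and helper name `ssa_components` are illustrative assumptions, not the authors' implementation.

```python
# Basic SSA: SVD of the Hankel trajectory matrix, then anti-diagonal
# averaging of each rank-1 piece to recover component time series.
import numpy as np

def ssa_components(x, L, n_comp):
    """Decompose series x into n_comp SSA reconstructed components."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of x
    H = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = []
    for r in range(n_comp):
        Xr = s[r] * np.outer(U[:, r], Vt[r])   # rank-1 piece of H
        # Averaging anti-diagonals of Xr maps it back to a length-N series
        comps.append(np.array([np.mean(Xr[::-1, :].diagonal(k))
                               for k in range(-L + 1, K)]))
    return comps

t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
comps = ssa_components(x, L=50, n_comp=3)
# For a noisy sine, the two leading components pair up to capture the
# oscillation; later components are mostly noise.
```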

  20. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.

  1. [A Feature Extraction Method for Brain Computer Interface Based on Multivariate Empirical Mode Decomposition].

    PubMed

    Wang, Jinjia; Liu, Yuan

    2015-04-01

    This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. First, we utilized the MEMD algorithm to decompose multichannel brain signals into a series of intrinsic mode functions (IMFs), which are approximately stationary and multi-scale. Then we extracted the power features from each IMF and reduced them to a lower dimension using principal component analysis (PCA). Finally, we classified the motor imagery tasks with a linear discriminant analysis classifier. Experimental verification showed that the correct recognition rates on the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, which were superior to those of the competition winners. The experiments showed that the proposed method is effective and stable, and it provides a new approach to feature extraction.
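
    The PCA feature-reduction and LDA classification stages of such a pipeline can be sketched with scikit-learn as below; the MEMD step is omitted, and the synthetic feature matrix, shapes, and class labels are illustrative assumptions.

```python
# Sketch of the PCA -> LDA stage on synthetic "IMF power" features; the MEMD
# decomposition itself is not reproduced and all sizes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_features = 120, 64          # e.g. power in each IMF x channel
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # two motor-imagery classes

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```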

  2. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two) order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, complex independent component analysis (CICA; Forootan, PhD 2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG 2012), where we (i) define a new complex data set using a Hilbert transformation, such that the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part; (ii) apply an ICA algorithm based on diagonalization of fourth-order cumulants to decompose the complex data set in (i); and (iii) recognize dominant non-stationary patterns as independent complex patterns that can be used to represent the space and time amplitude and phase propagation. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm. Forootan and Kusche (2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86(7), 477-497, doi:10.1007/s00190-011-0532-5.
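
    Step (i) of the construction, forming the complex data set with a Hilbert transformation, can be sketched with scipy as below; the toy series is an illustrative assumption, and the cumulant-based complex ICA of steps (ii)-(iii) is not reproduced.

```python
# Step (i) only: build the complex series (observation + i * Hilbert transform);
# the toy non-stationary signal is an illustrative assumption.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 20, 2000)
x = (1 + 0.3 * t / 20) * np.sin(2 * np.pi * 0.5 * t)

z = hilbert(x)                   # analytic signal: real part x, imaginary part H(x)
amplitude = np.abs(z)            # slowly growing envelope
phase = np.unwrap(np.angle(z))   # propagating phase
```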

  3. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is principal component analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (singular value decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE; furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  4. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is principal component analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (singular value decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE; furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
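
    The classical (non-Bayesian) KLE/PCA step that this procedure builds on can be sketched as below: an SVD of the centered data matrix yields the empirical basis functions whose sampling uncertainty the Bayesian treatment then models. The Brownian-motion-like toy paths are an illustrative assumption.

```python
# Empirical KLE/PCA via SVD of a centered ensemble of toy Brownian paths;
# the Bayesian machinery (matrix Bingham posterior, Gibbs sampling) is not
# reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid = 50, 200
paths = np.cumsum(rng.normal(scale=1 / np.sqrt(n_grid),
                             size=(n_samples, n_grid)), axis=1)

X = paths - paths.mean(axis=0)           # center the ensemble
U, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt                               # empirical KLE basis functions
energy = s**2 / np.sum(s**2)             # fraction of variance per mode
print(energy[:4])
```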

  5. Nature of Driving Force for Protein Folding-- A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1998-03-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan (MJ) derived a 20 × 20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the MJ matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constants C_0, C_1, C_2 and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
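
    The stated structure can be checked numerically: a matrix of the form above lies in the span of the vectors 1 and q, so at most two eigenmodes reconstruct it exactly. The sketch below uses arbitrary illustrative values for the constants and q, not the MJ matrix itself.

```python
# Verify that C0 + C1*(q_i + q_j) + C2*q_i*q_j is rank <= 2, so two
# eigenmodes suffice; the constants and q are illustrative, not MJ's.
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, size=20)          # one "hydrophobicity" value per residue
C0, C1, C2 = -2.0, 1.5, 3.0
ones = np.ones_like(q)
M = C0 * np.outer(ones, ones) + C1 * (np.outer(q, ones) + np.outer(ones, q)) \
    + C2 * np.outer(q, q)

w, V = np.linalg.eigh(M)
idx = np.argsort(np.abs(w))[::-1][:2]    # two dominant eigenmodes
M2 = (V[:, idx] * w[idx]) @ V[:, idx].T  # rank-2 reconstruction
print(np.allclose(M, M2))                # True: two components reproduce M
```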

  6. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. PCA is a widely used statistical tool developed during the first half of the past century; it serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data to boost the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations, such as the sensitivity of the lower-dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding, to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region where complex speckle noise prevents PCA from discerning true companions from noise.
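
    A generic low-rank plus sparse split of the kind LLSG builds on can be sketched via principal component pursuit with an inexact augmented-Lagrangian iteration, as below; this is the standard global RPCA formulation, not the authors' localized LLSG algorithm, and the data are synthetic.

```python
# Generic principal component pursuit (RPCA): M ~ L (low rank) + S (sparse),
# via singular-value thresholding and soft shrinkage. Synthetic data only.
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse update
        Y = Y + mu * (M - L - S)                   # dual ascent
    return L, S

rng = np.random.default_rng(0)
speckle = np.outer(rng.normal(size=60), rng.normal(size=40))  # rank-1 "starlight"
planet = (rng.random((60, 40)) < 0.05) * 5.0                  # sparse signal
L, S = rpca(speckle + planet)
```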

  7. Background recovery via motion-based robust principal component analysis with matrix factorization

    NASA Astrophysics Data System (ADS)

    Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping

    2018-03-01

    Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.

  8. Chromospheric umbral dynamics

    NASA Astrophysics Data System (ADS)

    Reardon, Kevin P.; Vecchio, Antonio; Cauzzi, Gianna; Tritschler, Alexandra

    2014-06-01

    The chromosphere above sunspots is dynamically driven by perturbations from lower layers of the atmosphere. Umbral flashes have long been understood to be the result of acoustic shocks due to the drop in density in the sunspot chromosphere. Detailed observations of umbral waves and flashes may help reveal the nature of sunspot structure in the upper atmosphere. We report high-resolution observations of umbral dynamics in the Ca II 8542 line obtained with IBIS at the Dunn Solar Telescope. We use a principal component decomposition technique (POD) to isolate different components of the observed oscillations, allowing us to explore the temporal and spatial evolution of the umbral flashes. We find significant variation in the nature of the flashes over the sunspot, indicating that the chromospheric magnetic topology can strongly modify the nature of the umbral intensity and velocity oscillations.
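
    A snapshot-POD computation of the sort used to isolate oscillation components can be sketched with numpy as below; the synthetic image sequence and its dimensions are illustrative assumptions.

```python
# Snapshot POD: SVD of a (pixels x time) matrix after removing the temporal
# mean; the synthetic "image sequence" stands in for the spectral scans.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames = 400, 300
t = np.arange(n_frames)
pattern = rng.normal(size=n_pix)                 # spatial pattern of a 3-min wave
cube = np.outer(pattern, np.sin(2 * np.pi * t / 36.0)) \
       + 0.1 * rng.normal(size=(n_pix, n_frames))

X = cube - cube.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
spatial_modes = U                  # POD spatial structures
temporal_coeffs = Vt               # their time evolution
print((s**2 / (s**2).sum())[:3])   # energy captured by leading modes
```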

  9. Subject order-independent group ICA (SOI-GICA) for functional MRI data analysis.

    PubMed

    Zhang, Han; Zuo, Xi-Nian; Ma, Shuang-Ye; Zang, Yu-Feng; Milham, Michael P; Zhu, Chao-Zhe

    2010-07-15

    Independent component analysis (ICA) is a data-driven approach to studying functional magnetic resonance imaging (fMRI) data. For group analysis on multiple subjects in particular, temporal concatenation group ICA (TC-GICA) is intensively used. However, due to usually limited computational capability, data reduction with principal component analysis (PCA, a standard preprocessing step of ICA decomposition) is difficult to achieve for a large dataset. To overcome this, TC-GICA employs multiple-stage PCA data reduction. Such multiple-stage PCA data reduction, however, leads to variable outputs due to different subject concatenation orders. Consequently, the ICA algorithm uses the variable multiple-stage PCA outputs and generates variable decompositions. In this study, a rigorous theoretical analysis was conducted to prove the existence of such variability. Simulated and real fMRI experiments were used to demonstrate the subject-order-induced variability of TC-GICA results using multiple PCA data reductions. To solve this problem, we propose a new subject order-independent group ICA (SOI-GICA). Both simulated and real fMRI data experiments demonstrated the high robustness and accuracy of SOI-GICA results compared to those of traditional TC-GICA. Accordingly, we recommend SOI-GICA for group ICA-based fMRI studies, especially those with large data sets. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Distinguishing autofluorescence of normal, benign, and cancerous breast tissues through wavelet domain correlation studies.

    PubMed

    Gharekhan, Anita H; Arora, Siddharth; Oza, Ashok N; Sureshkumar, Mundan B; Pradhan, Asima; Panigrahi, Prasanta K

    2011-08-01

    Using the multiresolution ability of wavelets and the effectiveness of singular value decomposition (SVD) in identifying statistically robust parameters, we find a number of local and global features of human breast tissues, capturing spectral correlations in the co- and cross-polarized channels at different scales. The copolarized component, being sensitive to intrinsic fluorescence, shows different behavior for normal, benign, and cancerous tissues in the emission domain of known fluorophores, whereas the perpendicular component, being more prone to the diffusive effect of scattering, reveals differences between malignant, normal, and benign tissues in the kernel-smoother density estimates applied to the principal components. The eigenvectors corresponding to the dominant eigenvalues of the correlation matrix in SVD also exhibit significant differences between the three tissue types, which clearly reflects the differences in spectral correlation behavior. Interestingly, the most significant distinguishing feature manifests in the perpendicular component, corresponding to the porphyrin emission range in cancerous tissue. The fact that the perpendicular component is strongly influenced by depolarization, and that porphyrin emissions in cancerous tissue have been found to be strongly depolarized, may be the possible cause of this observation.

  11. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Comparing against the most widespread dimension-reduction technique, principal component analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
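
    A wavelet reduction of a single pixel spectrum in this spirit can be sketched with the PyWavelets (`pywt`) package as below; the wavelet choice, decomposition level, and synthetic spectrum are illustrative assumptions.

```python
# Keep only the low-frequency approximation coefficients of a spectrum as a
# compressed signature; wavelet and level are illustrative choices.
import numpy as np
import pywt

rng = np.random.default_rng(0)
spectrum = np.sin(np.linspace(0, 8 * np.pi, 224)) + 0.05 * rng.normal(size=224)

coeffs = pywt.wavedec(spectrum, 'db4', level=3)
reduced = coeffs[0]                    # approximation coefficients
print(spectrum.size, '->', reduced.size)
```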

  12. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385
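
    The dictionary-learning and SVD feature steps can be sketched with scikit-learn as below for a single toy signal; the patch size, dictionary size, and signal are illustrative assumptions, and the PCA/KNN stages would follow on features collected from many signals.

```python
# Learn a dictionary from signal patches, then take the singular values of
# the dictionary matrix as a feature vector; all sizes are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)

patches = signal.reshape(64, 32)                    # 64 patches of 32 samples
dico = DictionaryLearning(n_components=16, max_iter=50, random_state=0)
dico.fit(patches)

features = np.linalg.svd(dico.components_, compute_uv=False)  # singular values
print(features.shape)   # (16,) feature vector for this signal
```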

  13. Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization

    PubMed Central

    Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos

    2015-01-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
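
    A small NMF decomposition in this parts-based spirit can be sketched with scikit-learn as below; the synthetic non-negative data matrix and the chosen rank are illustrative assumptions.

```python
# Parts-based factorization X ~ W @ H with non-negative factors; synthetic
# subjects-x-voxels data stand in for the neuroimaging matrices.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 500))             # subjects x voxels, non-negative

model = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(X)             # subject loadings
H = model.components_                  # spatial parts: localized, non-negative
print(W.shape, H.shape)
```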

  14. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities

    PubMed Central

    Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658

  15. Assessing the Relative Effects of Geographic Location and Soil Type on Microbial Communities Associated with Straw Decomposition

    PubMed Central

    Wang, Xiaoyue; Wang, Feng; Jiang, Yuji

    2013-01-01

    Decomposition of plant residues is largely mediated by soil-dwelling microorganisms whose activities are influenced by both climate conditions and properties of the soil. However, a comprehensive understanding of their relative importance remains elusive, mainly because traditional methods, such as soil incubation and environmental surveys, have a limited ability to differentiate between the combined effects of climate and soil. Here, we performed a large-scale reciprocal soil transplantation experiment, whereby microbial communities associated with straw decomposition were examined in three initially identical soils placed in parallel in three climate regions of China (red soil, Chao soil, and black soil, located in midsubtropical, warm-temperate, and cold-temperate zones). Maize straws buried in mesh bags were sampled at 0.5, 1, and 2 years after burial and subjected to chemical, physical, and microbiological analyses: phospholipid fatty acid analysis for microbial abundance, and community-level physiological profiling and 16S rRNA gene denaturing gradient gel electrophoresis for functional and phylogenetic diversity, respectively. Results of aggregated boosted tree analysis show that location, rather than soil, is the primary determining factor for the rate of straw decomposition and the structures of the associated microbial communities. Principal component analysis indicates that the straw communities are primarily grouped by location at any of the three time points. In contrast, microbial communities in bulk soil remained closely related to one another for each soil. Together, our data suggest that climate (specifically, geographic location) has stronger effects than soil on straw decomposition; moreover, the successional process of microbial communities in soils is slower than that in straw residues in response to climate changes. PMID:23524671

  16. Decomposition of coarse woody debris originating by clearcutting of an old-growth conifer forest

    Treesearch

    Jack E. Janisch; Mark E. Harmon; Hua Chen; Becky Fasth; Jay Sexton

    2005-01-01

    Decomposition constants (k) for aboveground logs and stumps and subsurface coarse roots originating from harvested old-growth forest (estimated age 400 to 600 y) were assessed by volume-density change methods along a 70-y chronosequence of clearcuts on the Wind River Ranger District, Washington, USA. Principal species sampled were Tsuga heterophylla...
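
    The decomposition constant in such studies typically comes from the single-exponential mass-loss model M(t) = M0 exp(-k t), so k is the negative slope of ln(fraction remaining) against time. The sketch below fits k to invented chronosequence numbers, not the Wind River data.

```python
# Fit k in M(t) = M0 * exp(-k t) from toy density-corrected mass fractions.
import numpy as np

age = np.array([5.0, 15.0, 30.0, 50.0, 70.0])       # years since harvest
frac = np.array([0.93, 0.80, 0.64, 0.48, 0.36])     # fraction of mass remaining

# Least-squares slope of ln(fraction remaining) vs. time gives -k
k = -np.polyfit(age, np.log(frac), 1)[0]
print(f"k = {k:.4f} per year")  # roughly 0.014-0.015 with these toy numbers
```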

  17. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.

  18. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    NASA Astrophysics Data System (ADS)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies: the measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD) based fusion rule, after which the fused sparse component is reconstructed from the fused measurements using the fast continuous linearized augmented Lagrangian algorithm (FCLALM), while the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images, exhibiting state-of-the-art performance in terms of both fusion quality and speed.

  19. Model based approach to UXO imaging using the time domain electromagnetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model-based imaging capability, i.e., the forward and inverse problem; (b) detector modeling and instrument design; and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.

  20. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interaction with model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD) based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the model direction. This model direction will be matched with the eigenvector derived from mode superposition analysis. The same technique will be implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to get the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied by these coarse grained models.

  1. Two biased estimation techniques in linear regression: Application to aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
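
    A principal components regression of the kind discussed can be sketched with scikit-learn as below; the nearly collinear synthetic regressors stand in for flight test data.

```python
# PCR: regress on a few leading PC scores instead of raw collinear regressors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1,
                     x1 + 0.01 * rng.normal(size=n),   # nearly collinear column
                     rng.normal(size=n)])
y = 2 * x1 + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
print(pcr.score(X, y))
```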

  2. Complex numbers in chemometrics: examples from multivariate impedance measurements on lipid monolayers.

    PubMed

    Geladi, Paul; Nelson, Andrew; Lindholm-Sethson, Britta

    2007-07-09

    Electrical impedance gives multivariate complex number data as results. Two examples of multivariate electrical impedance data measured on lipid monolayers in different solutions give rise to matrices (16×50 and 38×50) of complex numbers. Multivariate data analysis by principal component analysis (PCA) or singular value decomposition (SVD) can be used for complex data, and the necessary equations are given. The scores and loadings obtained are vectors of complex numbers. It is shown that complex number PCA and SVD are better at concentrating information in a few components than the naïve juxtaposition method, and that Argand diagrams can replace score and loading plots. Different concentrations of Magainin and Gramicidin A give different responses, and the role of the electrolyte medium can also be studied. An interaction of Gramicidin A in the solution with the monolayer over time can be observed.

  3. SandiaMRCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-01-05

    SandiaMCR was developed to identify pure components and their concentrations from spectral data. The software efficiently implements multivariate curve resolution alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD); version 3.37 also includes the PARAFAC-ALS and Tucker-1 algorithms (for trilinear analysis). The alternating least squares methods can be used to determine the composition with incomplete or no prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection, and compression options for the efficient processing of large data sets, including the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
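
    A bare-bones MCR-ALS iteration can be sketched in numpy as below: alternating least-squares solves for concentrations C and spectra S under non-negativity (enforced here by simple clipping), given D ≈ C S. The two-component toy mixture is an illustrative assumption; the released software layers constraints, weighting, and compression on top of this core.

```python
# Toy MCR-ALS: alternate solving D ~ C @ S for S then C, clipping negatives.
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 120)
S_true = np.vstack([np.exp(-((wl - 0.3) / 0.05)**2),
                    np.exp(-((wl - 0.7) / 0.08)**2)])      # pure spectra
C_true = np.abs(rng.random((40, 2)))                       # concentrations
D = C_true @ S_true + 0.01 * rng.normal(size=(40, 120))

C = np.abs(rng.random((40, 2)))                            # random start
for _ in range(100):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

print(np.linalg.norm(D - C @ S) / np.linalg.norm(D))       # residual fraction
```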

  4. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands, which indicate that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal is reflection off rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relation between volume scattering and cross-pol double bounce depends on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for estimating above-ground biomass using SAR observations.

  5. JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.

    PubMed

    Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B

    2013-03-01

    Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.

  6. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    NASA Astrophysics Data System (ADS)

    Benner, Ronald; Hatcher, Patrick G.; Hedges, John I.

    1990-07-01

    Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed.

  7. Early diagenesis of mangrove leaves in a tropical estuary: Bulk chemical characterization using solid-state 13C NMR and elemental analyses

    USGS Publications Warehouse

    Benner, R.; Hatcher, P.G.; Hedges, J.I.

    1990-01-01

    Changes in the chemical composition of mangrove (Rhizophora mangle) leaves during decomposition in tropical estuarine waters were characterized using solid-state 13C nuclear magnetic resonance (NMR) and elemental (CHNO) analysis. Carbohydrates were the most abundant components of the leaves accounting for about 50 wt% of senescent tissues. Tannins were estimated to account for about 20 wt% of leaf tissues, and lipid components, cutin, and possibly other aliphatic biopolymers in leaf cuticles accounted for about 15 wt%. Carbohydrates were generally less resistant to decomposition than the other constituents and decreased in relative concentration during decomposition. Tannins were of intermediate resistance to decomposition and remained in fairly constant proportion during decomposition. Paraffinic components were very resistant to decomposition and increased in relative concentration as decomposition progressed. Lignin was a minor component of all leaf tissues. Standard methods for the colorimetric determination of tannins (Folin-Dennis reagent) and the gravimetric determination of lignin (Klason lignin) were highly inaccurate when applied to mangrove leaves. The N content of the leaves was particularly dynamic with values ranging from 1.27 wt% in green leaves to 0.65 wt% in senescent yellow leaves attached to trees. During decomposition in the water the N content initially decreased to 0.51 wt% due to leaching, but values steadily increased thereafter to 1.07 wt% in the most degraded leaf samples. The absolute mass of N in the leaves increased during decomposition indicating that N immobilization was occurring as decomposition progressed. © 1990.

  8. Fungal succession in relation to volatile organic compounds emissions from Scots pine and Norway spruce leaf litter-decomposing fungi

    NASA Astrophysics Data System (ADS)

    Isidorov, Valery; Tyszkiewicz, Zofia; Pirożnikow, Ewa

    2016-04-01

    Leaf litter fungi are partly responsible for the decomposition of dead material, nutrient mobilization, and gas fluxes in forest ecosystems. It can be assumed that microbial destruction of dead plant materials is an important source of volatile organic compounds (VOCs) emitted into the atmosphere from terrestrial ecosystems. However, little information is available on both the composition of fungal VOCs and their producers, whose community can change at different stages of litter decomposition. Fungal community succession was investigated in a litter bag experiment with Scots pine (Pinus sylvestris) and Norway spruce (Picea abies) needle litter. The succession process can be divided into several stages controlled mostly by changes in litter quality. At the very first stages of decomposition the needle litter was colonized by ascomycetes, which can use readily available carbohydrates. At the later stages, the predominance of Trichoderma sp., known producers of cellulolytic enzymes, was documented. To investigate the fungi-derived VOCs, eight fungal species were isolated. Gas chromatographic analyses identified as many as 75 C2-C15 fungal volatile compounds. Most components detected in the emissions were very reactive substances: the principal groups of VOCs were monoterpenes, carbonyl compounds, and aliphatic alcohols. Production of VOCs by fungi was found to be species-specific: only 10 metabolites were emitted into the gas phase by all eight species. The reported data confirm that leaf litter decomposition is an important source of reactive organic compounds under the forest canopy.

  9. Empirical Orthogonal Function (EOF) Analysis of Storm-Time GPS Total Electron Content Variations

    NASA Astrophysics Data System (ADS)

    Thomas, E. G.; Coster, A. J.; Zhang, S.; McGranaghan, R. M.; Shepherd, S. G.; Baker, J. B.; Ruohoniemi, J. M.

    2016-12-01

    Large perturbations in ionospheric density are known to occur during geomagnetic storms triggered by dynamic structures in the solar wind. These ionospheric storm effects have long attracted interest due to their impact on the propagation characteristics of radio wave communications. Over the last two decades, maps of vertically integrated total electron content (TEC) based on data collected by worldwide networks of Global Positioning System (GPS) receivers have dramatically improved our ability to monitor the spatiotemporal dynamics of prominent storm-time features such as polar cap patches and storm enhanced density (SED) plumes. In this study, we use an empirical orthogonal function (EOF) decomposition technique to identify the primary modes of spatial and temporal variability in the storm-time GPS TEC response at midlatitudes over North America during more than 100 moderate geomagnetic storms from 2001-2013. We next examine the resulting time-varying principal components and their correlation with various geophysical indices and parameters in order to derive an analytical representation. Finally, we use a truncated reconstruction of the EOF basis functions and a parameterization of the principal components to produce an empirical representation of the geomagnetic storm-time response of GPS TEC for all magnetic local times and seasons at midlatitudes in the North American sector.
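
    An EOF decomposition of this kind reduces to an SVD of the anomaly (time × space) matrix, as in the sketch below; the synthetic TEC-like maps and their dimensions are illustrative assumptions.

```python
# EOFs via SVD of the mean-removed (epochs x grid cells) matrix; synthetic
# data stand in for the real TEC maps.
import numpy as np

rng = np.random.default_rng(0)
n_maps, n_cells = 400, 900                 # storm epochs x grid cells
pattern = rng.normal(size=n_cells)         # e.g. an SED-plume-like structure
amplitude = rng.gamma(2.0, 1.0, size=n_maps)
tec = np.outer(amplitude, pattern) + rng.normal(size=(n_maps, n_cells))

anom = tec - tec.mean(axis=0)              # remove the mean map
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                  # spatial modes
pcs = U * s                                # time-varying principal components
var_frac = s**2 / (s**2).sum()
print(var_frac[:3])                        # variance explained by leading EOFs
```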

  10. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases.

    PubMed

    Sengur, Abdulkadir

    2008-03-01

    In the last two decades, the use of artificial intelligence methods in medical analysis is increasing. This is mainly because the effectiveness of classification and detection systems have improved a great deal to help the medical experts in diagnosing. In this work, we investigate the use of principal component analysis (PCA), artificial immune system (AIS) and fuzzy k-NN to determine the normal and abnormal heart valves from the Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage. Filtering, normalization and white de-noising are the processes that were used in this stage. The feature extraction is the second stage. During feature extraction stage, wavelet packet decomposition was used. As a next step, wavelet entropy was considered as features. For reducing the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study is realized by using a data set containing 215 samples. The validation of the proposed method is measured by using the sensitivity and specificity parameters; 95.9% sensitivity and 96% specificity rate was obtained.

  11. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, A; Bouchard, H

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and to evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fractions and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization, and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to achieve submillimetric precision in proton range prediction. Based on simulated DECT data for 43 reference tissues, the proposed method agrees with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC, respectively. Furthermore, the proposed approach shows potential applications for spectral CT: using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to the reference methods. It is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.

  12. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linkins, A.E.

    1992-01-01

Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  13. Temporal Associations between Weather and Headache: Analysis by Empirical Mode Decomposition

    PubMed Central

    Yang, Albert C.; Fuh, Jong-Ling; Huang, Norden E.; Shia, Ben-Chang; Peng, Chung-Kang; Wang, Shuu-Jiun

    2011-01-01

    Background Patients frequently report that weather changes trigger headache or worsen existing headache symptoms. Recently, the method of empirical mode decomposition (EMD) has been used to delineate temporal relationships in certain diseases, and we applied this technique to identify intrinsic weather components associated with headache incidence data derived from a large-scale epidemiological survey of headache in the Greater Taipei area. Methodology/Principal Findings The study sample consisted of 52 randomly selected headache patients. The weather time-series parameters were detrended by the EMD method into a set of embedded oscillatory components, i.e. intrinsic mode functions (IMFs). Multiple linear regression models with forward stepwise methods were used to analyze the temporal associations between weather and headaches. We found no associations between the raw time series of weather variables and headache incidence. For decomposed intrinsic weather IMFs, temperature, sunshine duration, humidity, pressure, and maximal wind speed were associated with headache incidence during the cold period, whereas only maximal wind speed was associated during the warm period. In analyses examining all significant weather variables, IMFs derived from temperature and sunshine duration data accounted for up to 33.3% of the variance in headache incidence during the cold period. The association of headache incidence and weather IMFs in the cold period coincided with the cold fronts. Conclusions/Significance Using EMD analysis, we found a significant association between headache and intrinsic weather components, which was not detected by direct comparisons of raw weather data. Contributing weather parameters may vary in different geographic regions and different seasons. PMID:21297940
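
    The decomposition step described above is straightforward to reproduce. The sketch below is a minimal illustration, assuming the third-party PyEMD package (installed as EMD-signal) and a synthetic daily temperature series standing in for the study's weather data; the extracted IMFs are what would then enter the stepwise regression against headache incidence.

    ```python
    # Sketch: decompose a weather time series into intrinsic mode functions (IMFs)
    # with empirical mode decomposition. Assumes the PyEMD package
    # (pip install EMD-signal); the synthetic series stands in for daily temperature.
    import numpy as np
    from PyEMD import EMD

    rng = np.random.default_rng(0)
    days = np.arange(365)
    temperature = 20 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)

    imfs = EMD()(temperature)   # rows: IMFs from highest to lowest frequency
    print(f"{imfs.shape[0]} IMFs extracted")
    # Each IMF (and the residual trend) can now enter a stepwise regression
    # against headache incidence, as in the study's cold/warm period analysis.
    ```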

  14. Finding imaging patterns of structural covariance via Non-Negative Matrix Factorization.

    PubMed

    Sotiras, Aristeidis; Resnick, Susan M; Davatzikos, Christos

    2015-03-01

    In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. Copyright © 2014 Elsevier Inc. All rights reserved.
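
    As a rough illustration of the parts-based decomposition described above, the following sketch applies scikit-learn's NMF to a simulated non-negative subjects-by-voxels matrix; the component matrix H plays the role of the localized spatial patterns, while W holds subject loadings. All sizes and parameters are illustrative assumptions, not those of the paper.

    ```python
    # Minimal sketch of parts-based decomposition with non-negative matrix
    # factorization: rows are subjects, columns are voxel-wise structural
    # measurements (all non-negative). Data are simulated.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    X = rng.random((50, 2000))            # 50 subjects x 2000 voxels, non-negative

    model = NMF(n_components=10, init="nndsvd", max_iter=500, random_state=0)
    W = model.fit_transform(X)            # subject loadings (50 x 10)
    H = model.components_                 # spatial components (10 x 2000), non-negative
    # Non-negativity of H encourages localized, additive "parts", unlike the
    # signed, dispersed loadings typical of PCA/ICA components.
    ```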

  15. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  16. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
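
    A minimal sketch of the four-stage pipeline follows, under explicit stand-in assumptions: PyEMD supplies the EEMD decomposition, and, since the paper's RBF neural network and linear neural network are not standard library components, kernel ridge regression with an RBF kernel and ordinary linear regression are used in their place; the separate EMD denoising stage is folded into the decomposition for brevity.

    ```python
    # Sketch of the 'decomposition, components prediction and ensemble' stages.
    # Assumptions: PyEMD's EEMD; scikit-learn's KernelRidge and LinearRegression
    # as stand-ins for the paper's RBFNN and LNN; synthetic streamflow data.
    import numpy as np
    from PyEMD import EEMD
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    flow = np.cumsum(rng.normal(0, 1, 300)) + 10 * np.sin(np.arange(300) / 12)

    components = EEMD().eemd(flow)            # IMFs + residual, one per row
    lag = 4                                   # autoregressive inputs per component

    def lagged(x, lag):
        # rows: (x[t-lag], ..., x[t-1]) -> target x[t]
        X = np.column_stack([x[i:len(x) - lag + i] for i in range(lag)])
        return X, x[lag:]

    preds = []
    for comp in components:                   # stage 3: predict each component
        X, y = lagged(comp, lag)
        preds.append(KernelRidge(kernel="rbf").fit(X, y).predict(X))

    P = np.column_stack(preds)                # stage 4: linear ensemble
    target = flow[lag:]
    ensemble = LinearRegression().fit(P, target)
    print("in-sample R^2:", ensemble.score(P, target))
    ```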

  17. Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linkins, A.E.

    1992-09-01

Since this was the final year of this project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.

  18. Directional analysis of cardiac motion field from gated fluorodeoxyglucose PET images using the Discrete Helmholtz Hodge Decomposition.

    PubMed

    Sims, J A; Giorgi, M C; Oliveira, M A; Meneghetti, J C; Gutierrez, M A

    2018-04-01

    Extract directional information related to left ventricular (LV) rotation and torsion from a 4D PET motion field using the Discrete Helmholtz Hodge Decomposition (DHHD). Synthetic motion fields were created using superposition of rotational and radial field components and cardiac fields produced using optical flow from a control and patient image. These were decomposed into curl-free (CF) and divergence-free (DF) components using the DHHD. Synthetic radial components were present in the CF field and synthetic rotational components in the DF field, with each retaining its center position, direction of motion and diameter after decomposition. Direction of rotation at apex and base for the control field were in opposite directions during systole, reversing during diastole. The patient DF field had little overall rotation with several small rotators. The decomposition of the LV motion field into directional components could assist quantification of LV torsion, but further processing stages seem necessary. Copyright © 2017 Elsevier Ltd. All rights reserved.
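
    The decomposition itself can be illustrated with a spectral variant: on a periodic grid, projecting the Fourier coefficients of a vector field onto the wavevector direction yields the curl-free part, and the remainder is divergence-free. The paper uses a mesh-based Discrete Helmholtz Hodge Decomposition, so the FFT sketch below is only an illustrative stand-in under periodic-boundary assumptions.

    ```python
    # Sketch: Helmholtz decomposition of a 2D vector field (u, v) via FFT.
    import numpy as np

    n = 64
    y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    u = np.sin(2 * np.pi * y / n)          # x-component of the field
    v = np.cos(2 * np.pi * x / n)          # y-component

    kx = np.fft.fftfreq(n).reshape(1, n)   # frequencies along array axis 1 (x)
    ky = np.fft.fftfreq(n).reshape(n, 1)   # frequencies along array axis 0 (y)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                         # avoid division by zero at the mean mode

    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * U + ky * V                  # projection onto the gradient direction
    u_cf = np.real(np.fft.ifft2(kx * div / k2))   # curl-free part
    v_cf = np.real(np.fft.ifft2(ky * div / k2))
    u_df, v_df = u - u_cf, v - v_cf        # divergence-free remainder
    ```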

  19. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  20. Single-Input and Multiple-Output Surface Acoustic Wave Sensing for Damage Quantification in Piezoelectric Sensors.

    PubMed

    Pamwani, Lavish; Habib, Anowarul; Melandsø, Frank; Ahluwalia, Balpreet Singh; Shelke, Amit

    2018-06-22

    The main aim of the paper is damage detection at the microscale in the anisotropic piezoelectric sensors using surface acoustic waves (SAWs). A novel technique based on the single input and multiple output of Rayleigh waves is proposed to detect the microscale cracks/flaws in the sensor. A convex-shaped interdigital transducer is fabricated for excitation of divergent SAWs in the sensor. An angularly shaped interdigital transducer (IDT) is fabricated at 0 degrees and ±20 degrees for sensing the convex shape evolution of SAWs. A precalibrated damage was introduced in the piezoelectric sensor material using a micro-indenter in the direction perpendicular to the pointing direction of the SAW. Damage detection algorithms based on empirical mode decomposition (EMD) and principal component analysis (PCA) are implemented to quantify the evolution of damage in piezoelectric sensor material. The evolution of the damage was quantified using a proposed condition indicator (CI) based on normalized Euclidean norm of the change in principal angles, corresponding to pristine and damaged states. The CI indicator provides a robust and accurate metric for detection and quantification of damage.
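
    The condition indicator can be sketched as follows: PCA-derived principal subspaces are computed for the pristine and damaged states, and the CI is the Euclidean norm of the principal angles between them, normalized by its maximum possible value. The data, subspace dimension, and damage model below are all illustrative assumptions; the paper's EMD preprocessing of the SAW signals is omitted.

    ```python
    # Sketch of a subspace-angle condition indicator between two sensor states.
    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(3)
    pristine = rng.normal(size=(200, 8))
    damaged = pristine + 0.3 * rng.normal(size=(200, 8))   # stand-in for damage

    def principal_subspace(X, r=3):
        # columns: first r right singular vectors of the centered data
        _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        return Vt[:r].T

    angles = subspace_angles(principal_subspace(pristine), principal_subspace(damaged))
    ci = np.linalg.norm(angles) / np.linalg.norm(np.full_like(angles, np.pi / 2))
    print("condition indicator:", ci)    # 0 = identical subspaces, 1 = orthogonal
    ```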

  1. Factors regulating carbon sinks in mangrove ecosystems.

    PubMed

    Li, Shi-Bo; Chen, Po-Hung; Huang, Jih-Sheng; Hsueh, Mei-Li; Hsieh, Li-Yung; Lee, Chen-Lu; Lin, Hsing-Juh

    2018-05-23

Mangroves are recognized as one of the richest carbon storage systems. However, the factors regulating carbon sinks in mangrove ecosystems are still unclear, particularly in the subtropical mangroves. The biomass, production, litterfall, detrital export and decomposition of the dominant mangrove vegetation in subtropical (Kandelia obovata) and tropical (Avicennia marina) Taiwan were quantified from October 2011 to July 2014 to construct the carbon budgets. Despite the different tree species, a principal component analysis revealed the site or environmental conditions had a greater influence than the tree species on the carbon processes. For both species, the net production (NP) rates ranged from 10.86 to 27.64 Mg C ha⁻¹ year⁻¹ and were higher than the global average rate due to the high tree density. While most of the litterfall remained on the ground, a high percentage (72%-91%) of the ground litter decomposed within 1 year and fluxed out of the mangroves. However, human activities might cause a carbon flux into the mangroves and a lower NP rate. The rates of the organic carbon export and soil heterotrophic respiration were greater than the global mean values and those at other locations. Only a small percentage (3%-12%) of the NP was stored in the sediment. The carbon burial rates were much lower than the global average rate due to their faster decomposition, indicating that decomposition played a critical role in determining the burial rate in the sediment. The summation of the organic and inorganic carbon fluxes and soil heterotrophic respiration well exceeded the amount of litter decomposition, indicating an additional source of organic carbon that was unaccounted for by decomposition in the sediment. Sediment-stable isotope analyses further suggest that the trapping of organic matter from upstream rivers or adjacent waters contributed more to the mangrove carbon sinks than the actual production of the mangrove trees. © 2018 John Wiley & Sons Ltd.

  2. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  3. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    NASA Astrophysics Data System (ADS)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

In this paper, a statistical forecast model using the time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, an interannual component with periods shorter than 8 years, an interdecadal component with periods from 8 to 30 years, and a component with periods longer than 30 years. Then, predictors are selected for each of the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model captures the interannual and interdecadal variation of FPR. The hindcast of FPR during the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
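
    One plausible way to realize such a time-scale separation is with low-pass filters at the 8- and 30-year cutoffs, as in the sketch below (Butterworth filtering via SciPy on a synthetic annual series); the paper's exact decomposition scheme may differ.

    ```python
    # Sketch: split an annual series into <8 yr, 8-30 yr and >30 yr components.
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(4)
    rain = rng.normal(500, 80, 120)             # 120 years of flood-period rainfall

    def lowpass(x, period_yr):
        # sampling is 1/yr, so Nyquist = 0.5 cycles/yr and the normalized
        # cutoff for a given period is (1/period) / 0.5 = 2/period
        b, a = butter(3, 2.0 / period_yr)
        return filtfilt(b, a, x)

    slow30 = lowpass(rain, 30.0)                # periods > 30 yr
    slow8 = lowpass(rain, 8.0)                  # periods > 8 yr
    interannual = rain - slow8                  # < 8 yr
    interdecadal = slow8 - slow30               # 8-30 yr
    # Each component is then regressed on its own predictors and the three
    # predictions are summed to forecast the flood-period rainfall.
    ```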

  4. Facilitating Neuronal Connectivity Analysis of Evoked Responses by Exposing Local Activity with Principal Component Analysis Preprocessing: Simulation of Evoked MEG

    PubMed Central

    Gao, Lin; Zhang, Tongsheng; Wang, Jue; Stephen, Julia

    2014-01-01

When connectivity analysis is carried out for event-related EEG and MEG, the presence of strong spatial correlations from spontaneous background activity may mask the local neuronal evoked activity and lead to spurious connections. In this paper, we hypothesized that PCA decomposition could be used to diminish the background activity and further improve the performance of connectivity analysis in event-related experiments. The idea was tested using simulation, where we found that for the 306-channel Elekta Neuromag system, the first 4 PCs represent the dominant background activity, and the source connectivity pattern after preprocessing is consistent with the true connectivity pattern designed in the simulation. Discarding the first few PCs improved the signal-to-noise ratio of the evoked responses and yielded increased coherences at the major physiological frequency bands. Furthermore, the evoked information was maintained after PCA preprocessing. In conclusion, it is demonstrated that the first few PCs represent background activity, and PCA decomposition can be employed to remove it to expose the evoked activity for the channels under investigation. Therefore, PCA can be applied as a preprocessing approach to improve neuronal connectivity analysis for event-related data. PMID:22918837

  5. Facilitating neuronal connectivity analysis of evoked responses by exposing local activity with principal component analysis preprocessing: simulation of evoked MEG.

    PubMed

    Gao, Lin; Zhang, Tongsheng; Wang, Jue; Stephen, Julia

    2013-04-01

When connectivity analysis is carried out for event-related EEG and MEG, the presence of strong spatial correlations from spontaneous background activity may mask the local neuronal evoked activity and lead to spurious connections. In this paper, we hypothesized that PCA decomposition could be used to diminish the background activity and further improve the performance of connectivity analysis in event-related experiments. The idea was tested using simulation, where we found that for the 306-channel Elekta Neuromag system, the first 4 PCs represent the dominant background activity, and the source connectivity pattern after preprocessing is consistent with the true connectivity pattern designed in the simulation. Discarding the first few PCs improved the signal-to-noise ratio of the evoked responses and yielded increased coherences at the major physiological frequency bands. Furthermore, the evoked information was maintained after PCA preprocessing. In conclusion, it is demonstrated that the first few PCs represent background activity, and PCA decomposition can be employed to remove it to expose the evoked activity for the channels under investigation. Therefore, PCA can be applied as a preprocessing approach to improve neuronal connectivity analysis for event-related data.
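
    The preprocessing amounts to projecting out the leading principal components. A minimal sketch on simulated channels-by-samples data (dimensions and the choice of four PCs are illustrative):

    ```python
    # Sketch: subtract the contribution of the first k PCs, treated as
    # spatially correlated background, before connectivity analysis.
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(size=(306, 2000))               # channels x time samples
    X -= X.mean(axis=1, keepdims=True)

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 4                                          # PCs attributed to background
    background = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    cleaned = X - background                       # evoked/local activity exposed
    print("variance removed: %.1f%%" % (100 * (s[:k]**2).sum() / (s**2).sum()))
    ```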

  6. Low-dimensional representation of near-wall dynamics in shear flows, with implications to wall-models.

    PubMed

    Schmid, P J; Sayadi, T

    2017-03-13

    The dynamics of coherent structures near the wall of a turbulent boundary layer is investigated with the aim of a low-dimensional representation of its essential features. Based on a triple decomposition into mean, coherent and incoherent motion and a dynamic mode decomposition to recover statistical information about the incoherent part of the flow field, a driven linear system coupling first- and second-order moments of the coherent structures is derived and analysed. The transfer function for this system, evaluated for a wall-parallel plane, confirms a strong bias towards streamwise elongated structures, and is proposed as an 'impedance' boundary condition which replaces the bulk of the transport between the coherent velocity field and the coherent Reynolds stresses, thus acting as a wall model for large-eddy simulations (LES). It is interesting to note that the boundary condition is non-local in space and time. The extracted model is capable of reproducing the principal Reynolds stress components for the pretransitional, transitional and fully turbulent boundary layer.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
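
    Since dynamic mode decomposition is the statistical workhorse here, a minimal sketch of the standard exact-DMD algorithm on synthetic snapshot data may help; the rank and sizes are illustrative, and the paper's triple decomposition and wall-model machinery are not reproduced.

    ```python
    # Sketch of exact dynamic mode decomposition (DMD) from snapshot pairs.
    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(50, 201))                 # state snapshots in columns
    X1, X2 = X[:, :-1], X[:, 1:]                   # pairs x_k -> x_{k+1}

    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    r = 10                                         # truncation rank
    Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T
    A_tilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)   # reduced propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vr @ np.linalg.inv(Sr) @ W        # DMD modes; eigenvalues
                                                   # encode growth rate and frequency
    ```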

  7. Understanding determinants of unequal distribution of stillbirth in Tehran, Iran: a concentration index decomposition approach.

    PubMed

    Almasi-Hashiani, Amir; Sepidarkish, Mahdi; Safiri, Saeid; Khedmati Morasae, Esmaeil; Shadi, Yahya; Omani-Samani, Reza

    2017-05-17

The present inquiry set out to determine the economic inequality in history of stillbirth and to understand the determinants of the unequal distribution of stillbirth in Tehran, Iran. A population-based cross-sectional study was conducted on 5170 pregnancies in Tehran, Iran, in 2015. Principal component analysis (PCA) was applied to measure asset-based economic status. The concentration index was used to measure socioeconomic inequality in stillbirth and was then decomposed into its determinants. The concentration index and its 95% CI for stillbirth was -0.121 (-0.235 to -0.002). Decomposition of the concentration index showed that mother's education (50%), mother's occupation (30%), economic status (26%) and father's age (12%) had the highest positive contributions to the measured inequality in stillbirth history in Tehran. Mother's age (17%) had the highest negative contribution to inequality. Stillbirth is unequally distributed among Iranian women and is mostly concentrated among people of low economic status. Mother-related factors had the highest positive and negative contributions to inequality, highlighting the need for specific interventions for mothers to redress inequality. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
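
    The concentration index used above has a convenient covariance form, C = 2 cov(h, r) / μ, where h is the health variable, r the fractional socioeconomic rank, and μ the mean of h. The sketch below computes it on simulated data; the variables are illustrative stand-ins, not the study's data.

    ```python
    # Sketch: concentration index from an asset-based SES score and a binary outcome.
    import numpy as np

    rng = np.random.default_rng(7)
    ses_score = rng.normal(size=500)                 # e.g., PCA-based asset index
    stillbirth = rng.binomial(1, 0.03, 500)          # illustrative outcome

    rank = ses_score.argsort().argsort() / (len(ses_score) - 1)  # fractional rank, poorest = 0
    c_index = 2 * np.cov(stillbirth, rank)[0, 1] / stillbirth.mean()
    print("concentration index:", c_index)           # negative: concentrated among the poor
    ```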

  8. Inheritance of dermatoglyphic asymmetry and diversity traits in twins based on factor: variance decomposition analysis.

    PubMed

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2013-06-01

Dermatoglyphic asymmetry and diversity traits from a large number of twins (MZ and DZ) were analyzed on the basis of principal factors to evaluate genetic effects and common familial environmental influences on twin data, using maximum likelihood-based variance decomposition analysis. The sample consists of monozygotic (MZ) twins of two sexes (102 male pairs and 138 female pairs) and 120 pairs of dizygotic (DZ) female twins. All asymmetry (DA and FA) and diversity dermatoglyphic traits were clearly separated into factors. These results corroborate earlier studies in different ethnic populations, which indicates that the underlying component structure of dermatoglyphic characters may have a common biological validity. Our heritability results in twins clearly showed that DA_F2 is inherited mostly in a dominant mode (28.0%) while FA_F1 is additive (60.7%), with no significant difference between sexes observed for these factors. Inheritance is also very prominent in diversity Factor 1, which exactly corroborates our previous findings. The present results are similar to earlier results on finger ridge count diversity in twin data, which suggested that finger ridge count diversity is under genetic control.

  9. Socioeconomic Inequality in Childhood Obesity.

    PubMed

    Moradi, Ghobad; Mostafavi, Farideh; Azadi, Namamali; Esmaeilnasab, Nader; Ghaderi, Ebrahim

    2017-08-15

The aim of this study was to assess socioeconomic inequalities in obesity and overweight in children aged 10 to 12 yr old. This cross-sectional study was conducted on 2506 children aged 10 to 12 yr old in the city of Sanandaj, western Iran, in 2015. Body mass index (BMI) was calculated. Considering household situation and assets, the socioeconomic status (SES) of the subjects was determined using principal component analysis (PCA). The concentration index was used to measure inequality, and Oaxaca decomposition was used to determine the share of different determinants of inequality. The prevalence of overweight was 24.1% (95% CI: 22.4, 25.7), and 11.5% (95% CI: 10.0, 12.0) of children were obese. The concentration indices for overweight and obesity were 0.10 (95% CI: 0.05, 0.15) and 0.07 (95% CI: 0.00, 0.14), respectively, indicating inequality with a higher prevalence of obesity and overweight at higher SES. The results of the Oaxaca decomposition suggested that socioeconomic factors accounted for 75.8% of the existing inequalities. Residential area and mother's education were the most important causes of inequality. To reduce inequalities in childhood obesity, mothers' education must be promoted and special attention must be paid to residential areas and children's gender.

  10. Quantitative assessment in thermal image segmentation for artistic objects

    NASA Astrophysics Data System (ADS)

    Yousefi, Bardia; Sfarra, Stefano; Maldague, Xavier P. V.

    2017-07-01

The application of thermal and infrared technology in different areas of research is increasing considerably. These applications involve Non-destructive Testing (NDT), medical analysis (Computer-Aided Diagnosis/Detection, CAD), and arts and archaeology, among many others. In the arts and archaeology field, infrared technology provides significant contributions in terms of finding defects in possibly impaired regions. This has been done through a wide range of thermographic experiments and infrared methods. The approach proposed here focuses on the application of known factor analysis methods, such as standard Non-Negative Matrix Factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1), standard NMF optimized by the Non-negative Least Squares (NNLS) active-set algorithm (SNMF2), and eigendecomposition approaches such as Principal Component Thermography (PCT) and Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT), to obtain thermal features. On one hand, these methods are usually applied as preprocessing before clustering for the segmentation of possible defects. On the other hand, a wavelet-based data fusion combines the data of each method with PCT to increase the accuracy of the algorithm. The quantitative assessment of these approaches indicates considerable segmentation quality along with reasonable computational complexity, showing promising performance and confirming the outlined properties. In particular, a polychromatic wooden statue and a fresco were analyzed using the above-mentioned methods, and interesting results were obtained.
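
    Principal Component Thermography, one of the methods compared above, is essentially an SVD of the unfolded thermal sequence. A minimal sketch on simulated frames follows; the standardization choice and frame sizes are assumptions.

    ```python
    # Sketch of PCT: unfold a thermal image sequence into a (time x pixels)
    # matrix; the right singular vectors give empirical orthogonal function
    # images used to highlight defects. Frames are simulated.
    import numpy as np

    rng = np.random.default_rng(8)
    frames = rng.normal(size=(100, 64, 64))        # time x height x width
    A = frames.reshape(100, -1)
    A = (A - A.mean(axis=0)) / A.std(axis=0)       # standardize each pixel's history

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    eof2 = Vt[1].reshape(64, 64)                   # 2nd EOF image, often defect-sensitive
    ```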

  11. Risk prediction for myocardial infarction via generalized functional regression models.

    PubMed

    Ieva, Francesca; Paganoni, Anna M

    2016-08-01

In this paper, we propose a generalized functional linear regression model for a binary outcome indicating the presence/absence of a cardiac disease with multivariate functional data among the relevant predictors. In particular, the motivating aim is the analysis of electrocardiographic traces of patients whose pre-hospital electrocardiogram (ECG) has been sent to the 118 Dispatch Center of Milan (118 is the Italian toll-free number for emergencies) by life support personnel of the basic rescue units. The statistical analysis starts with a preprocessing of ECGs treated as multivariate functional data. The signals are reconstructed from noisy observations. The biological variability is then removed by a nonlinear registration procedure based on landmarks. Thus, in order to perform a data-driven dimensional reduction, a multivariate functional principal component analysis is carried out on the variance-covariance matrix of the reconstructed and registered ECGs and their first derivatives. We use the scores of the principal components decomposition as covariates in a generalized linear model to predict the presence of the disease in a new patient. Hence, a new semi-automatic diagnostic procedure is proposed to estimate the risk of infarction (in the case of interest, the probability of being affected by Left Bundle Branch Block). The performance of this classification method is evaluated and compared with other methods proposed in the literature. Finally, the robustness of the procedure is checked via leave-j-out techniques. © The Author(s) 2013.

  12. Principal component analysis of MSBAS DInSAR time series from Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Tiampo, Kristy F.; González, Pablo J.; Samsonov, Sergey; Fernández, Jose; Camacho, Antonio

    2017-09-01

    Because of its proximity to the city of Naples and with a population of nearly 1 million people within its caldera, Campi Flegrei is one of the highest risk volcanic areas in the world. Since the last major eruption in 1538, the caldera has undergone frequent episodes of ground subsidence and uplift accompanied by seismic activity that has been interpreted as the result of a stationary, deeper source below the caldera that feeds shallower eruptions. However, the location and depth of the deeper source is not well-characterized and its relationship to current activity is poorly understood. Recently, a significant increase in the uplift rate has occurred, resulting in almost 13 cm of uplift by 2013 (De Martino et al., 2014; Samsonov et al., 2014b; Di Vito et al., 2016). Here we apply a principal component decomposition to high resolution time series from the region produced by the advanced Multidimensional SBAS DInSAR technique in order to better delineate both the deeper source and the recent shallow activity. We analyzed both a period of substantial subsidence (1993-1999) and a second of significant uplift (2007-2013) and inverted the associated vertical surface displacement for the most likely source models. Results suggest that the underlying dynamics of the caldera changed in the late 1990s, from one in which the primary signal arises from a shallow deflating source above a deeper, expanding source to one dominated by a shallow inflating source. In general, the shallow source lies between 2700 and 3400 m below the caldera while the deeper source lies at 7600 m or more in depth. The combination of principal component analysis with high resolution MSBAS time series data allows for these new insights and confirms the applicability of both to areas at risk from dynamic natural hazards.

  13. Gas hydrate characterization from a 3D seismic dataset in the deepwater eastern Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Daniel; Haneberg, William C.

Principal component analysis of spectral decomposition results, combined with amplitude and frequency seismic attributes derived from 3D seismic data, is used for the identification and characterization of gas hydrate deposits in the deepwater eastern Gulf of Mexico. In the central deepwater Gulf of Mexico (GoM), logging-while-drilling (LWD) data provided insight into the amplitude response of gas hydrate saturation in sands, which could be used to characterize complex gas hydrate deposits in other sandy deposits. In this study, a large 3D seismic data set from equivalent and distal Plio-Pleistocene sandy channel deposits in the deepwater eastern Gulf of Mexico is screened for direct hydrocarbon indicators of gas hydrate saturated sands.

  14. A unification of mediation and interaction: a four-way decomposition

    PubMed Central

    VanderWeele, Tyler J.

    2014-01-01

    It is shown that the overall effect of an exposure on an outcome, in the presence of a mediator with which the exposure may interact, can be decomposed into four components: (i) the effect of the exposure in the absence of the mediator, (ii) the interactive effect when the mediator is left to what it would be in the absence of exposure, (iii) a mediated interaction, and (iv) a pure mediated effect. These four components, respectively, correspond to the portion of the effect that is due to neither mediation nor interaction, to just interaction (but not mediation), to both mediation and interaction, and to just mediation (but not interaction). This four-way decomposition unites methods that attribute effects to interactions and methods that assess mediation. Certain combinations of these four components correspond to measures for mediation, while other combinations correspond to measures of interaction previously proposed in the literature. Prior decompositions in the literature are in essence special cases of this four-way decomposition. The four-way decomposition can be carried out using standard statistical models, and software is provided to estimate each of the four components. The four-way decomposition provides maximum insight into how much of an effect is mediated, how much is due to interaction, how much is due to both mediation and interaction together, and how much is due to neither. PMID:25000145
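
    In the usual counterfactual notation (Y_{a,m} is the outcome under exposure a with the mediator set to m, M_a is the mediator under exposure a, and 0 is the reference level), the decomposition for a binary exposure and mediator can be written as below; this rendering follows the standard form of the result, with the four terms corresponding to components (i)-(iv) above.

    ```latex
    % Four-way decomposition of the total effect, binary exposure and mediator.
    \begin{aligned}
    E[Y_1 - Y_0]
      &= \underbrace{E[Y_{1,0} - Y_{0,0}]}_{\text{(i) neither}}
       + \underbrace{E[(Y_{1,1} - Y_{1,0} - Y_{0,1} + Y_{0,0})\,M_0]}_{\text{(ii) interaction only}} \\
      &\quad
       + \underbrace{E[(Y_{1,1} - Y_{1,0} - Y_{0,1} + Y_{0,0})(M_1 - M_0)]}_{\text{(iii) mediated interaction}}
       + \underbrace{E[(Y_{0,1} - Y_{0,0})(M_1 - M_0)]}_{\text{(iv) pure mediated effect}}
    \end{aligned}
    ```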

  15. Characterization of alkyl carbon in forest soils by CPMAS 13C NMR spectroscopy and dipolar dephasing

    USGS Publications Warehouse

    Kogel-Knabner, I.; Hatcher, P.G.

    1989-01-01

Samples obtained from forest soils at different stages of decomposition were treated sequentially with chloroform/methanol (extraction of lipids), sulfuric acid (hydrolysis), and sodium chlorite (delignification) to enrich them in refractory alkyl carbon. As revealed by NMR spectroscopy, this treatment yielded residues with high contents of alkyl carbon. In the NMR spectra of residues obtained from litter samples, resonances for carbohydrates are also present, indicating that these carbohydrates are tightly bound to the alkyl carbon structures. During decomposition in the soils this resistant carbohydrate fraction is lost almost completely. In the litter samples the alkyl carbon shows a dipolar dephasing behavior indicative of two structural components, a rigid and a more mobile component. As depth and decomposition increase, only the rigid component is observed. This fact could be due to selective degradation of the mobile component or to changes in molecular mobility during decomposition, e.g., because of an increase in cross-linking or contact with the mineral matter of the soil.

  16. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated techniques for presenting the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMFs of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert-transformed IMFs of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMFs by, for example, filtering the two-dimensional signal by reconstructing it from selected IMFs.

  17. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978

  18. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium-based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
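
    The conventional two-basis decomposition that both approaches parameterize can be illustrated in its monoenergetic idealization: with two energy measurements and known basis attenuation coefficients, the basis line integrals follow from a 2x2 linear solve. All numbers below are illustrative; the paper instead handles polychromatic spectra via the ML or empirical fit.

    ```python
    # Sketch: idealized two-material decomposition from two energy measurements.
    import numpy as np

    mu = np.array([[0.26, 0.34],     # row: energy E1; columns: basis materials
                   [0.18, 0.20]])    # row: energy E2 (illustrative values, 1/cm)
    line_integrals_true = np.array([1.2, 0.7])    # basis thicknesses along the ray
    measured = mu @ line_integrals_true           # ideal -log(I/I0) at the two energies

    recovered = np.linalg.solve(mu, measured)
    print(recovered)                               # [1.2, 0.7]
    ```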

  19. Forensic age estimation by morphometric analysis of the manubrium from 3D MR images.

    PubMed

    Martínez Vera, Naira P; Höller, Johannes; Widek, Thomas; Neumayer, Bernhard; Ehammer, Thomas; Urschler, Martin

    2017-08-01

Forensic age estimation research based on skeletal structures focuses on patterns of growth and development using different bones. In this work, our aim was to study growth-related evolution of the manubrium in living adolescents and young adults using magnetic resonance imaging (MRI), which is an image acquisition modality that does not involve ionizing radiation. In a first step, individual manubrium and subject features were correlated with age, which confirmed a statistically significant change of manubrium volume (M_vol: p < 0.01, adjusted R² = 0.50) and surface area (M_sur: p < 0.01, adjusted R² = 0.53) over the studied age range. Additionally, the shapes of the manubria were for the first time investigated using principal component analysis. The decomposition of the data into principal components allowed the contribution of each component to total shape variation to be analysed. With 13 principal components, about 96% of shape variation could be described (M_shp: p < 0.01, adjusted R² = 0.60). Multiple linear regression analysis modelled the relationship between the best-correlated variables and age. Models including manubrium shape and volume or surface area divided by the height of the subject (Y ~ M_shp M_sur/S_h: p < 0.01, adjusted R² = 0.71; Y ~ M_shp M_vol/S_h: p < 0.01, adjusted R² = 0.72) presented a standard error of estimate of two years. In order to estimate the accuracy of these two manubrium-based age estimation models, cross-validation experiments predicting age on held-out test sets were performed. The median absolute difference between predicted and known chronological age was 1.18 years for the best performing model (Y ~ M_shp M_sur/S_h: p < 0.01, predictive R² = 0.67). In conclusion, despite limitations in determining legal majority age, manubrium morphometry analysis presented statistically significant results for skeletal age estimation, which indicates that this bone structure may be considered as a new candidate in multi-factorial MRI-based age estimation. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Discrimination of a chestnut-oak forest unit for geologic mapping by means of a principal component enhancement of Landsat multispectral scanner data.

    USGS Publications Warehouse

    Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.

    1981-01-01

A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of eastern Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors

  1. On the Composition of Risk Preference and Belief

    ERIC Educational Resources Information Center

Wakker, Peter P.

    2004-01-01

    Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…

  2. Temperature responses of individual soil organic matter components

    NASA Astrophysics Data System (ADS)

    Feng, Xiaojuan; Simpson, Myrna J.

    2008-09-01

Temperature responses of soil organic matter (SOM) remain unclear partly due to its chemical and compositional heterogeneity. In this study, the decomposition of SOM from two grassland soils was investigated in a 1-year laboratory incubation at six different temperatures. SOM was separated into solvent extractable compounds, suberin- and cutin-derived compounds, and lignin-derived monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components have distinct chemical structures and stabilities and their decomposition patterns over the course of the experiment were fitted with a two-pool exponential decay model. The stability of SOM components was also assessed using geochemical parameters and kinetic parameters derived from model fitting. Compared with the solvent extractable compounds, a low percentage of lignin monomers partitioned into the labile SOM pool. Suberin- and cutin-derived compounds were poorly fitted by the decay model, and their recalcitrance was shown by the geochemical degradation parameter (ω-C16/ΣC16), which was observed to stabilize during the incubation. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses and the decomposition of lignin monomers exhibited higher Q10 values than the decomposition of solvent extractable compounds. Our study shows that Q10 values derived from soil respiration measurements may not be reliable indicators of temperature responses of individual SOM components.
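
    The Q10 computation referred to above is a one-liner once decay rates are fitted at two temperatures: Q10 = (k2/k1)^(10/(T2-T1)). Illustrative numbers, not the study's fitted rates:

    ```python
    # Q10 from first-order decay rates fitted at two incubation temperatures.
    k1, k2 = 0.010, 0.022        # illustrative decay rates (1/day)
    T1, T2 = 15.0, 25.0          # degrees C
    q10 = (k2 / k1) ** (10.0 / (T2 - T1))
    print(q10)                   # 2.2: decay more than doubles per 10 degree rise
    ```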

  3. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

The paper introduces the indices for multicollinearity diagnostics, the basic principle of principal component regression, and the determination of the 'best' equation method. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, covering all calculation steps of principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
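
    For readers outside SPSS, the same workflow can be sketched in a few lines of Python, with scikit-learn standing in for the SPSS procedures; the collinear data are simulated.

    ```python
    # Sketch of principal component regression: regress the response on the
    # leading PCs of the standardized predictors, then map coefficients back.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    X = rng.normal(size=(100, 6))
    X[:, 5] = X[:, 0] + 0.01 * rng.normal(size=100)   # engineered multicollinearity
    y = X[:, 0] + rng.normal(size=100)

    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=3).fit(Z)
    scores = pca.transform(Z)                         # PC scores as regressors
    reg = LinearRegression().fit(scores, y)
    beta = pca.components_.T @ reg.coef_              # coefficients on standardized X
    print(beta)
    ```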

  4. Molecular Mechanisms in the shock induced decomposition of FOX-7

    NASA Astrophysics Data System (ADS)

    Mishra, Ankit; Tiwari, Subodh C.; Nakano, Aiichiro; Vashishta, Priya; Kalia, Rajiv; CACS Team

Experimental and first-principles computational studies on FOX-7 have either involved very small systems consisting of a few atoms or have not taken into account the decomposition mechanisms under extreme conditions of temperature and pressure. We have performed large-scale reactive MD simulations using the ReaxFF-lg force field to study the shock decomposition of FOX-7. The chemical composition of the principal decomposition products correlates well with experimental observations. Furthermore, we observed that the production of N2 and H2O was intermolecular in nature and proceeded through different chemical pathways. Moreover, the production of CO and CO2 was delayed due to the formation of large, stable carbon-oxygen clusters. These critical insights into the initial processes involved in the shock-induced decomposition of FOX-7 will greatly help in understanding the factors that play an important role in the insensitivity of this high-energy material. This research is supported by AFOSR Award No. FA9550-16-1-0042.

  5. Mass transfer in fuel cells. [electron microscopy of components, thermal decomposition of Teflon, water transport, and surface tension of KOH solutions

    NASA Technical Reports Server (NTRS)

    Walker, R. D., Jr.

    1973-01-01

    Results of experiments on electron microscopy of fuel cell components, thermal decomposition of Teflon by thermogravimetry, surface area and pore size distribution measurements, water transport in fuel cells, and surface tension of KOH solutions are described.

  6. Independent components of neural activity carry information on individual populations.

    PubMed

    Głąbska, Helena; Potworowski, Jan; Łęski, Szymon; Wójcik, Daniel K

    2014-01-01

Local field potential (LFP), the low-frequency part of the potential recorded extracellularly in the brain, reflects neural activity at the population level. The interpretation of LFP is complicated because it can mix activity from remote cells, on the order of millimeters from the electrode. To better understand the relation between the recordings and the local activity of cells, we used a large-scale network thalamocortical model to compute simultaneous LFP, transmembrane currents, and spiking activity. We used this model to study the information contained in independent components obtained from the reconstructed Current Source Density (CSD), which smooths transmembrane currents, decomposed further with Independent Component Analysis (ICA). We found that the three most robust components matched well the activity of two dominating cell populations: superior pyramidal cells in layer 2/3 (rhythmic spiking) and tufted pyramids from layer 5 (intrinsically bursting). The pyramidal population from layer 2/3 could not be well described as a product of spatial profile and temporal activation, but rather by a sum of two such products, which we recovered in two of the ICA components in our analysis; these correspond to the first two principal components of the PCA decomposition of layer 2/3 population activity. At low noise one more cell population could be discerned, but it is unlikely that it could be recovered in experiment given typical noise ranges.

  7. Independent Components of Neural Activity Carry Information on Individual Populations

    PubMed Central

    Głąbska, Helena; Potworowski, Jan; Łęski, Szymon; Wójcik, Daniel K.

    2014-01-01

Local field potential (LFP), the low-frequency part of the potential recorded extracellularly in the brain, reflects neural activity at the population level. The interpretation of LFP is complicated because it can mix activity from remote cells, on the order of millimeters from the electrode. To better understand the relation between the recordings and the local activity of cells, we used a large-scale network thalamocortical model to compute simultaneous LFP, transmembrane currents, and spiking activity. We used this model to study the information contained in independent components obtained from the reconstructed Current Source Density (CSD), which smooths transmembrane currents, decomposed further with Independent Component Analysis (ICA). We found that the three most robust components matched well the activity of two dominating cell populations: superior pyramidal cells in layer 2/3 (rhythmic spiking) and tufted pyramids from layer 5 (intrinsically bursting). The pyramidal population from layer 2/3 could not be well described as a product of spatial profile and temporal activation, but rather by a sum of two such products, which we recovered in two of the ICA components in our analysis; these correspond to the first two principal components of the PCA decomposition of layer 2/3 population activity. At low noise one more cell population could be discerned, but it is unlikely that it could be recovered in experiment given typical noise ranges. PMID:25153730
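
    The unmixing step can be illustrated with scikit-learn's FastICA on toy data: two source time courses (one rhythmic, one bursting, loosely echoing the two populations above) are mixed into multiple channels and then recovered. Sizes, waveforms, and noise level are illustrative assumptions.

    ```python
    # Sketch: recover two source time courses from noisy linear mixtures,
    # in the spirit of the CSD + ICA analysis described above.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(10)
    t = np.linspace(0, 1, 1000)
    sources = np.c_[np.sin(40 * t),                 # rhythmic spiking stand-in
                    np.sign(np.sin(7 * t))]         # bursting stand-in
    mixing = rng.normal(size=(2, 16))               # spatial profiles
    channels = sources @ mixing + 0.05 * rng.normal(size=(1000, 16))

    recovered = FastICA(n_components=2, random_state=0).fit_transform(channels)
    # 'recovered' holds the two unmixed time courses (up to sign and scale).
    ```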

  8. Anaerobic decomposition of humic substances by Clostridium from the deep subsurface

    PubMed Central

    Ueno, Akio; Shimizu, Satoru; Tamamura, Shuji; Okuyama, Hidetoshi; Naganuma, Takeshi; Kaneko, Katsuhiko

    2016-01-01

    Decomposition of humic substances (HSs) is a slow and cryptic but non-negligible component of carbon cycling in sediments. Aerobic decomposition of HSs by microorganisms in the surface environment has been well documented; however, the mechanism of anaerobic microbial decomposition of HSs is not completely understood. Moreover, no microorganisms capable of anaerobic decomposition of HSs have been isolated. Here, we report the anaerobic decomposition of humic acids (HAs) by the anaerobic bacterium Clostridium sp. HSAI-1 isolated from the deep terrestrial subsurface. The use of 14C-labelled polycatechol as an HA analogue demonstrated that the bacterium decomposed this substance up to 7.4% over 14 days. The decomposition of commercial and natural HAs by the bacterium yielded lower molecular mass fractions, as determined using high-performance size-exclusion chromatography. Fourier transform infrared spectroscopy revealed the removal of carboxyl groups and polysaccharide-related substances, as well as the generation of aliphatic components, amide and aromatic groups. Therefore, our results suggest that Clostridium sp. HSAI-1 anaerobically decomposes and transforms HSs. This study improves our understanding of the anaerobic decomposition of HSs in the hidden carbon cycling in the Earth’s subsurface. PMID:26743007

  9. A restricted signature normal form for Hermitian matrices, quasi-spectral decompositions, and applications

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Huckle, Thomas

    1989-01-01

In recent years, a number of results on the relationships between the inertias of Hermitian matrices and the inertias of their principal submatrices have appeared in the literature. We study restricted congruence transformations of Hermitian matrices M which, at the same time, induce a congruence transformation of a given principal submatrix A of M. Such transformations lead to the concept of the restricted signature normal form of M. In particular, by means of this normal form, we obtain short proofs of most of the known inertia theorems and also derive some new results of this type. For some applications, a special class of almost unitary restricted congruence transformations turns out to be useful. We show that, with such transformations, M can be reduced to a quasi-diagonal form which, in particular, displays the eigenvalues of A. Finally, applications of this quasi-spectral decomposition to generalized inverses and Hermitian matrix pencils are discussed.

  10. Glove-based approach to online signature verification.

    PubMed

    Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A

    2008-06-01

Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the singular value decomposition (SVD) as the numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove as an effective high-bandwidth data entry device for signature verification is presented. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forgery signatures with a false acceptance rate of less than 1.2%.

  11. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.

  12. Temperature Responses of Soil Organic Matter Components With Varying Recalcitrance

    NASA Astrophysics Data System (ADS)

    Simpson, M. J.; Feng, X.

    2007-12-01

    The response of soil organic matter (SOM) to global warming remains unclear, partly due to the chemical heterogeneity of SOM composition. In this study, the decomposition of SOM from two grassland soils was investigated in a one-year laboratory incubation at six different temperatures. SOM was separated into solvent-extractable compounds, suberin- and cutin-derived compounds, and lignin monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components had distinct chemical structures and recalcitrance, and their decomposition was fitted by a two-pool exponential decay model. The stability of SOM components was assessed using geochemical parameters and kinetic parameters derived from model fitting. Lignin monomers exhibited much lower decay rates than solvent-extractable compounds, and a relatively low percentage of lignin monomers partitioned into the labile SOM pool, which confirmed the generally accepted recalcitrance of lignin compounds. Suberin- and cutin-derived compounds were poorly fitted by the exponential decay model, and their recalcitrance was shown by the geochemical degradation parameter, which stabilized during the incubation. The aliphatic components of suberin degraded faster than cutin-derived compounds, suggesting that cutin-derived compounds in the soil may be at a higher stage of degradation than suberin-derived compounds. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses, and the decomposition of the recalcitrant lignin monomers had much higher Q10 values than soil respiration or the decomposition of solvent-extractable compounds. Our study shows that the decomposition of recalcitrant SOM is highly sensitive to temperature, more so than bulk soil mineralization. This observation suggests a potential acceleration in the degradation of the recalcitrant SOM pool with global warming.
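
    The two-pool model and the Q10 definition invoked above can be made concrete. The sketch below fits synthetic mass-loss data; the pool fraction, rate constants, and temperatures are invented for illustration and are not the study's values:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_pool(t, f_labile, k_fast, k_slow):
        """Fraction of initial C remaining: labile pool + recalcitrant pool."""
        return f_labile * np.exp(-k_fast * t) + (1 - f_labile) * np.exp(-k_slow * t)

    t = np.linspace(0, 365, 13)                        # days of incubation
    obs = two_pool(t, 0.25, 0.05, 0.001) \
          + 0.01 * np.random.default_rng(1).standard_normal(t.size)

    (f, kf, ks), _ = curve_fit(two_pool, t, obs, p0=(0.3, 0.02, 0.002),
                               bounds=([0, 0, 0], [1, 1, 0.1]))

    def q10(k1, k2, T1, T2):
        """Q10 from decay rates k1, k2 fitted at temperatures T1 < T2 (deg C)."""
        return (k2 / k1) ** (10.0 / (T2 - T1))

    print(f, kf, ks, q10(0.001, 0.0019, 15, 25))
    ```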

  13. Orthogonal decomposition of left ventricular remodeling in myocardial infarction

    PubMed Central

    Zhang, Xingyu; Medrano-Gracia, Pau; Ambale-Venkatesh, Bharath; Bluemke, David A.; Cowan, Brett R; Finn, J. Paul; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Young, Alistair A.; Suinesiaputra, Avan

    2017-01-01

    Left ventricular size and shape are important for quantifying cardiac remodeling in response to cardiovascular disease. Geometric remodeling indices have been shown to have prognostic value in predicting adverse events in the clinical literature, but these often describe interrelated shape changes. We developed a novel method for deriving orthogonal remodeling components directly from any (moderately independent) set of clinical remodeling indices. Results: Six clinical remodeling indices (end-diastolic volume index, sphericity, relative wall thickness, ejection fraction, apical conicity, and longitudinal shortening) were evaluated using cardiac magnetic resonance images of 300 patients with myocardial infarction, and 1991 asymptomatic subjects, obtained from the Cardiac Atlas Project. Partial least squares (PLS) regression of left ventricular shape models resulted in remodeling components that were optimally associated with each remodeling index. A Gram–Schmidt orthogonalization process, by which remodeling components were successively removed from the shape space in the order of shape variance explained, resulted in a set of orthonormal remodeling components. Remodeling scores could then be calculated that quantify the amount of each remodeling component present in each case. A one-factor PLS regression led to more decoupling between scores from the different remodeling components across the entire cohort, and zero correlation between clinical indices and subsequent scores. Conclusions: The PLS orthogonal remodeling components had similar power to describe differences between myocardial infarction patients and asymptomatic subjects as principal component analysis, but were better associated with well-understood clinical indices of cardiac remodeling. The data and analyses are available from www.cardiacatlas.org. PMID:28327972
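
    A minimal sketch of the Gram–Schmidt step described above, using random stand-ins for the PLS-derived remodeling directions; the 40-dimensional shape space, the six directions, and their ordering are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    components = rng.standard_normal((6, 40))  # 6 remodeling directions in shape space

    ortho = []
    for c in components:                       # assumed ordered by variance explained
        for q in ortho:
            c = c - (c @ q) * q                # remove the part along earlier components
        ortho.append(c / np.linalg.norm(c))
    ortho = np.array(ortho)                    # orthonormal remodeling components

    shapes = rng.standard_normal((10, 40))     # 10 cases in the same shape space
    scores = shapes @ ortho.T                  # remodeling score per component per case
    ```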

  14. A simple method for decomposition of peracetic acid in a microalgal cultivation system.

    PubMed

    Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won

    2015-03-01

    A cost-efficient process that avoids several washing steps was developed, in which cultivation follows directly after decomposition of the sterilizing agent. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization with 2 mM PAA demands at least 1 h of incubation time for effective disinfection. Direct degradation of PAA was demonstrated by utilizing components of conventional algal medium: ferric ion and pH buffer (HEPES) showed a synergetic effect, decomposing PAA within 6 h. In contrast, NaNO3, one of the main components of algal media, inhibits the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 prepared by decomposition of PAA. This process of sterilization followed by decomposition of PAA should support the cost-efficient management of large-scale photobioreactors for the production of value-added products and biofuels from microalgal biomass.

  15. Characterization and discrimination of human breast cancer and normal breast tissues using resonance Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.

    2018-02-01

    Worldwide, breast cancer incidence has increased by more than twenty percent in the past decade, and mortality due to the disease has increased by fourteen percent over the same period. Optical diagnostic techniques, such as Raman spectroscopy, have been explored as a way to increase diagnostic accuracy in a more objective manner while significantly decreasing diagnostic wait times. In this study, Raman spectroscopy with 532-nm excitation was used to induce resonance effects that enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by performing a ten-spectra average with a 500-ms acquisition time at each acquisition location. The raw spectral data were prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750-2000 cm-1 were used for analysis, since the detector has its highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then applied to the processed data. Leave-one-out cross-validation using two selected feature components resulted in a sensitivity, specificity, and accuracy of 92.6%, 100%, and 96.0%, respectively. The performance of NMF was also compared to that of principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling resonance Raman spectroscopy with a subsequent NMF decomposition has potential for high characterization accuracy in breast cancer detection.
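
    A minimal sketch of the NMF step on a spectra-by-wavenumber matrix; the random nonnegative data, the 625-bin grid (roughly 2 cm-1 bins over 750-2000 cm-1), and the initialization choice are assumptions for illustration, not the study's preprocessing:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((72, 625)))   # 72 spectra x 625 wavenumber bins

    nmf = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
    W = nmf.fit_transform(X)     # per-spectrum scores on the two components
    H = nmf.components_          # two nonnegative basis "spectra"
    # W (72 x 2) would then serve as features for leave-one-out classification.
    ```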

  16. Multidecadal climate variability of global lands and oceans

    USGS Publications Warehouse

    McCabe, G.J.; Palecki, M.A.

    2006-01-01

    Principal components analysis (PCA) and singular value decomposition (SVD) are used to identify the primary modes of decadal and multidecadal variability in annual global Palmer Drought Severity Index (PDSI) values and sea-surface temperatures (SSTs). The PDSI and SST data for 1925-2003 were detrended and smoothed (with a 10-year moving average) to isolate the decadal and multidecadal variability. The first two principal components (PCs) of the PDSI PCA explained almost 38% of the decadal and multidecadal variance in the detrended and smoothed global annual PDSI data. The first two PCs of detrended and smoothed global annual SSTs explained nearly 56% of the decadal variability in global SSTs. The PDSI PCs and the SST PCs are directly correlated in a pairwise fashion. The first PDSI and SST PCs reflect variability of the detrended and smoothed annual Pacific Decadal Oscillation (PDO), as well as detrended and smoothed annual Indian Ocean SSTs. The second set of PCs is strongly associated with the Atlantic Multidecadal Oscillation (AMO). The SVD analysis of the cross-covariance of the PDSI and SST data confirmed the close link between the PDSI and SST modes of decadal and multidecadal variation and provided a verification of the PCA results. These findings indicate that the major modes of multidecadal variations in SSTs and land-surface climate conditions are highly interrelated through a small number of spatially complex but slowly varying teleconnections. Therefore, these relations may be adaptable to providing improved baseline conditions for seasonal climate forecasting. Published in 2006 by John Wiley & Sons, Ltd.
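
    The SVD-of-cross-covariance step (often called maximum covariance analysis) can be sketched as follows; the array sizes and synthetic anomaly fields are illustrative only, with rows as years and columns as grid points:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pdsi = rng.standard_normal((79, 500))   # 1925-2003 annual PDSI anomalies
    sst = rng.standard_normal((79, 800))    # matching SST anomalies

    pdsi -= pdsi.mean(axis=0)
    sst -= sst.mean(axis=0)

    C = pdsi.T @ sst / (pdsi.shape[0] - 1)  # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)

    # Leading coupled spatial patterns and their expansion-coefficient time series:
    pdsi_pattern, sst_pattern = U[:, 0], Vt[0]
    pdsi_ec, sst_ec = pdsi @ pdsi_pattern, sst @ sst_pattern
    frac_sq_cov = s[0] ** 2 / np.sum(s ** 2)   # squared covariance fraction of mode 1
    ```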

  17. On the estimation of physical height changes using GRACE satellite mission data - A case study of Central Europe

    NASA Astrophysics Data System (ADS)

    Godah, Walyeldeen; Szelachowska, Małgorzata; Krynski, Jan

    2017-12-01

    The dedicated gravity satellite missions, in particular the GRACE (Gravity Recovery and Climate Experiment) mission launched in 2002, provide unique data for studying temporal variations of mass distribution in the Earth's system, and thereby the changes in the geometry and gravity field of the Earth. The main objective of this contribution is to estimate physical height (e.g. orthometric/normal height) changes over Central Europe using GRACE satellite mission data, and to analyse and model them over the selected study area. Physical height changes were estimated from temporal variations of height anomalies and vertical displacements of the Earth surface determined over the investigated area. The release 5 (RL05) GRACE-based global geopotential models as well as load Love numbers from the Preliminary Reference Earth Model (PREM) were used as input data. The estimated physical height changes were analysed and modelled using two methods: the seasonal decomposition method and the PCA/EOF (Principal Component Analysis/Empirical Orthogonal Function) method, and the differences obtained were discussed. The main findings reveal that physical height changes over the selected study area reach up to 22.8 mm. The obtained physical height changes can be modelled with an accuracy of 1.4 mm using the seasonal decomposition method.

  18. Photo diagnosis of early pre cancer (LSIL) in genital tissue

    NASA Astrophysics Data System (ADS)

    Vaitkuviene, A.; Andersen-Engels, S.; Auksorius, E.; Bendsoe, N.; Gavriushin, V.; Gustafsson, U.; Oyama, J.; Palsson, S.; Soto Thompson, M.; Stenram, U.; Svanberg, K.; Viliunas, V.; De Weert, M. J.

    2005-11-01

    Persistent infections are recognized as an oncogenic factor, and sexually transmitted diseases are common concomitant conditions in early precancerous genital tract lesions. Simple optical detection of early regressive precancer of the cervix is the aim of this study; hereditary immunosuppression is most likely a risk factor for cervical cancer development. Light-induced fluorescence point monitoring was fitted to live cervical tissue diagnostics in 42 patients. Human papillomavirus DNA in the cervix was tested by means of the Hybrid Capture II method. Ultraviolet (337 nm) laser-excited fluorescence spectra of live cervical tissue were analyzed by the principal component (PrC) regression method and a spectral decomposition method. The PrC regression method best discriminated the pathology group "CIN I and inflammation" (AUC = 75%), related to fluorescence emission in the short-wavelength region. The spectral decomposition method suggested a few possible fluorophores in the long-wavelength region. Excitation of live cervix with 398 nm light produced a sharp, selective enhancement of spectral intensity in the region above 600 nm for high-grade cervical lesions. Conclusion: PrC analysis of fluorescence spectra excited by UV (337 nm) light provides information related to local immunity and low-grade cervical lesions. The addition of shorter and longer excitation wavelengths is promising for the progress of multi-wavelength LIF point monitoring in cervical precancer diagnostics and its utility for cancer prevention, especially in developing countries.

  19. How well are the climate indices related to the GRACE-observed total water storage changes in China?

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Vishwakarma, B.; Sneeuw, N. J.

    2017-12-01

    Fresh water availability over land masses is changing rapidly under the influence of climate change and human intervention. In order to manage our water resources and plan for a better future, we need to demarcate the role of climate change. The total water storage change in a region can be obtained from the GRACE satellite mission. On the other hand, many climate change indicators, for example ENSO, are derived from sea surface temperature. In this contribution we investigate the relationship between the total water storage change over China and climate indices using statistical time-series decomposition techniques, such as Seasonal and Trend decomposition using Loess (STL), Principal Component Analysis (PCA), and Canonical Correlation Analysis (CCA). Anomalies in climate variables, such as sea surface temperature, are responsible for anomalous precipitation and thus anomalous total water storage change over land. Therefore, it is imperative to use a GRACE product that can capture anomalous water storage changes with unprecedented accuracy. Since filtering decreases the sensitivity of GRACE products substantially, we use the data-driven method of deviation to recover the signal lost due to filtering. To this end, we are able to obtain the spatial fingerprint of each climate index on the total water storage change observed over China.
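
    A minimal sketch of the STL step on a synthetic monthly storage series; the period, dates, and signal below are assumptions, and GRACE-specific processing is omitted:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL

    rng = np.random.default_rng(0)
    dates = pd.date_range("2003-01", periods=144, freq="MS")   # 12 years, monthly
    tws = (10 * np.sin(2 * np.pi * np.arange(144) / 12)        # seasonal cycle
           + 0.05 * np.arange(144)                             # slow trend
           + rng.standard_normal(144))                         # anomalies
    series = pd.Series(tws, index=dates)

    res = STL(series, period=12, robust=True).fit()
    # res.trend, res.seasonal, res.resid would then be compared (e.g. by
    # correlation or CCA) with ENSO and other climate indices.
    ```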

  20. Compositional aspects of herbaceous litter decomposition in the freshwater marshes of the Florida Everglades

    USDA-ARS?s Scientific Manuscript database

    Litter decomposition in wetlands is an important component of ecosystem function in these detrital systems. In oligotrophic wetlands, such as the Florida Everglades, litter decomposition processes are dependent on nutrient availability and litter quality. However, not much is known about how the che...

  1. Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.

    PubMed

    Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin

    2017-11-01

    Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) acquisitions provide better insight into brain dynamics, but some artefacts due to simultaneous acquisition threaten the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data from 11 participants. For the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. Real data were recorded under two conditions: resting state (eyes-closed dataset) and task-influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely average artefact subtraction (AAS), optimal basis set (OBS), and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses showed that the proposed method performs better, and the differences were significant for all quantitative parameters except power and sample entropy. The proposed method does not require any reference signal, prior information, or assumption to extract the BCG artefact, which makes it very useful in circumstances where a reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
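
    A hedged sketch in the spirit of the EMD+PCA combination, not the authors' algorithm: decompose one EEG channel into IMFs with EMD, apply PCA across the IMFs, zero the highest-variance component (assumed here to carry the BCG-like artefact), and reconstruct. This assumes the PyEMD package (pip package "EMD-signal"); the signal model is invented:

    ```python
    import numpy as np
    from PyEMD import EMD
    from sklearn.decomposition import PCA

    fs = 250
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 1.2 * t)  # alpha + BCG-like

    imfs = EMD()(eeg)                    # (n_imfs, n_samples)
    pca = PCA().fit(imfs.T)
    scores = pca.transform(imfs.T)
    scores[:, 0] = 0                     # zero the dominant (artefact) component
    cleaned_imfs = pca.inverse_transform(scores).T
    eeg_clean = cleaned_imfs.sum(axis=0)
    ```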

  2. a Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    NASA Astrophysics Data System (ADS)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter to it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area, located in northeastern China, hosts various wetland resources and exhibits sea ice in winter. We use GF-3 quad-pol data as the study data; GF-3 is China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage, or with low vegetation in the non-growing season, and the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features from quad-polarization data. Moreover, they could serve as input for subsequent classification or parameter inversion.

  3. On the Fallibility of Principal Components in Research

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2017-01-01

    The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

  4. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods

    PubMed Central

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2018-01-01

    Background: Visual acuity, like many other health-related outcomes, is not equally distributed across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods, and to compare their results, in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. The outcome variable was presenting visual acuity (PVA), measured in LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for the estimation of inequality was economic status, constructed by principal component analysis of home assets. The inequality indices were the concentration index and the gap between low and high economic groups; we decomposed them using the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results: The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between the high and low economic status groups was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decompositions. The percent contributions of these three factors in the concentration index versus the Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status, and diabetes, had minor contributions. Conclusion: This study showed that poorer visual acuity was concentrated among people with lower economic status. The main contributors to this inequality were similar in the concentration index and Blinder-Oaxaca decompositions. Appropriate interventions to promote literacy and income among people with low economic status, policies to address economic problems in the elderly, and greater attention to their vision problems could help alleviate economic inequality in visual acuity. PMID:29325403
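
    A hedged sketch of the concentration index via the convenient-covariance formula C = 2·cov(y, r)/mean(y), where r is the fractional rank in the economic-status distribution; the simulated wealth scores and LogMAR values are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    wealth = rng.gamma(2.0, 1.0, n)                 # economic-status score
    pva = (0.30 - 0.05 * (wealth - wealth.mean())   # LogMAR PVA: higher is worse,
           + 0.05 * rng.standard_normal(n))         # here made worse among the poor

    order = np.argsort(wealth)
    frac_rank = np.empty(n)
    frac_rank[order] = (np.arange(n) + 0.5) / n     # fractional rank in (0, 1)

    conc_index = 2.0 * np.cov(pva, frac_rank)[0, 1] / pva.mean()
    print(conc_index)   # negative: worse vision concentrated among the poor
    ```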

  5. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced into these models to control for differences between programs and also between sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality: many of the metrics are highly correlated. Consider the two attributes lines of code, LOC, and number of program statements, Stmts. It is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for some statistical analyses such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation: the estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to explore this structure is principal components analysis, a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to map a set of highly related software attributes into a small number of uncorrelated attribute domains, which solves the problem of multicollinearity in subsequent regression analysis. There are many software metrics in the literature, but principal components analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics, each of which represents a distinct software attribute domain.
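
    A minimal sketch of this idea on two collinear metrics, LOC and statement count; the synthetic data are standardized before PCA, and all sizes are illustrative:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    loc = rng.normal(500, 150, 100)                 # lines of code
    stmts = 0.8 * loc + rng.normal(0, 10, 100)      # statement count, nearly collinear
    metrics = np.column_stack([loc, stmts])

    Z = StandardScaler().fit_transform(metrics)
    pca = PCA().fit(Z)
    print(pca.explained_variance_ratio_)   # first component carries nearly all variance
    scores = pca.transform(Z)               # orthogonal domain scores, safe for regression
    ```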

  6. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, a scheme based on visual feature contrast using the sum-modified-Laplacian was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and fuses multi-focus images better than some traditional methods.
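
    A hedged sketch of a sum-modified-Laplacian (SML) selection rule of the kind described above; the window size, wrap-around border handling, and per-pixel winner-take-all selection are simplifying assumptions, not the paper's exact rule:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sum_modified_laplacian(img, window=3):
        """Modified Laplacian at each pixel, summed over a local window."""
        ml = (np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
              + np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1)))
        return uniform_filter(ml, size=window)      # local sum up to a constant factor

    def fuse_bimf(bimf_a, bimf_b, window=3):
        """Pick, per pixel, the BIMF coefficient from the better-focused source."""
        mask = (sum_modified_laplacian(bimf_a, window)
                >= sum_modified_laplacian(bimf_b, window))
        return np.where(mask, bimf_a, bimf_b)
    ```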

  7. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value, or 'cost', of each component of a large scale system when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In the latter case, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
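
    One concrete form such a quadratic cost decomposition can take, sketched under assumed dynamics rather than the paper's exact definitions: for a stable linear system driven by white noise, the total cost V = E[x'Qx] = tr(QX) splits into per-state terms via the diagonal of QX, where X is the steady-state covariance from a Lyapunov equation. The matrices below are illustrative:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, 0.5],
                  [0.0, -2.0]])   # stable dynamics dx/dt = A x + B w
    B = np.eye(2)
    Q = np.diag([1.0, 4.0])       # quadratic cost weights

    X = solve_continuous_lyapunov(A, -B @ B.T)   # A X + X A' + B B' = 0
    V = np.trace(Q @ X)                          # total cost
    component_costs = np.diag(Q @ X)             # per-state contributions
    assert np.isclose(component_costs.sum(), V)  # they sum exactly to V
    ```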

  8. Three-component seismic data in thin interbedded reservoir exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Li-Yan; Wang, Yan-Chun; Pei, Jiang-Yun

    2015-03-01

    We present the first successful application of three-component seismic data to thin interbedded reservoir characterization in the Daqing placanticline of the LMD oilfield. The oilfield has reached the final high-water-cut stage, and the principal problem is how to recognize the boundaries of sand layers that are thicker than 2 m. Conventional interpretation of single PP-wave seismic data yields multiple solutions, whereas the introduction of PS-waves enhances the reliability of interpretation. We analyze the gas reservoir characteristics jointly from PP- and PS-waves, and use amplitude and frequency decomposition attributes to delineate the gas reservoir boundaries, exploiting the minimal effect of fluids on S-waves. We perform joint inversion of PP- and PS-waves to obtain VP/VS, λρ, and μρ, and map the lithology changes using density, λρ, and μρ. The 3D-3C attribute λρ slices describe the sand layer distribution, in agreement with well log data, and point to favorable regions for tapping the remaining oil.

  9. Structure-affinity relationships for the binding of actinomycin D to DNA

    NASA Astrophysics Data System (ADS)

    Gallego, José; Ortiz, Angel R.; de Pascual-Teresa, Beatriz; Gago, Federico

    1997-03-01

    Molecular models of the complexes between actinomycin D and 14 different DNA hexamers were built based on the X-ray crystal structure of the actinomycin-d(GAAGCTTC)2 complex. The DNA sequences included the canonical GpC binding step flanked by different base pairs, nonclassical binding sites such as GpG and GpT, and sites containing 2,6-diaminopurine. A good correlation was found between the intermolecular interaction energies calculated for the refined complexes and the relative preferences of actinomycin binding to standard and modified DNA. A detailed energy decomposition into van der Waals and electrostatic components for the interactions between the DNA base pairs and either the chromophore or the peptidic part of the antibiotic was performed for each complex. The resulting energy matrix was then subjected to principal component analysis, which showed that actinomycin D discriminates among different DNA sequences by an interplay of hydrogen bonding and stacking interactions. The structure-affinity relationships for this important antitumor drug are thus rationalized and may be used to advantage in the design of novel sequence-specific DNA-binding agents.

  10. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In the evaluation of results, involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and separate components corresponding to different functionalities.

  11. Decomposition Techniques for Icesat/glas Full-Waveform Data

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Gao, X.; Li, G.; Chen, J.

    2018-04-01

    The Geoscience Laser Altimeter System (GLAS), on board the Ice, Cloud, and land Elevation Satellite (ICESat), is the first long-duration spaceborne full-waveform LiDAR for measuring the topography of ice shelves and its temporal variation, as well as cloud and atmospheric characteristics. In order to extract the characteristic parameters of the waveform, the key step is processing the full-waveform data. In this paper, a modified waveform decomposition method is proposed to extract the echo components from the full waveform. First, initial parameter estimation is implemented through data preprocessing and waveform detection. Next, waveform fitting is performed using the Levenberg-Marquardt (LM) optimization method. The results show that the modified waveform decomposition method can effectively extract overlapped and missing echo components compared with the GLA14 product.
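
    A minimal sketch of the decomposition loop described above: detect peaks for initial estimates, then refine a sum of Gaussians with Levenberg-Marquardt. The waveform, thresholds, and initial widths are synthetic stand-ins, not GLAS records (real pipelines would also smooth and denoise first):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import find_peaks

    def gaussians(t, *p):
        """Sum of Gaussians; p = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
        y = np.zeros_like(t)
        for A, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
            y += A * np.exp(-0.5 * ((t - mu) / sig) ** 2)
        return y

    t = np.linspace(0, 100, 500)
    wave = gaussians(t, 1.0, 40, 3.0, 0.6, 52, 4.0) \
           + 0.01 * np.random.default_rng(0).standard_normal(t.size)

    peaks, _ = find_peaks(wave, height=0.3, distance=40)
    p0 = np.ravel([[wave[i], t[i], 3.0] for i in peaks])   # initial estimates
    popt, _ = curve_fit(gaussians, t, wave, p0=p0, method="lm")
    # popt holds amplitude, position, and width for each echo component.
    ```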

  12. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.

  13. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore less than that obtained with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.

  14. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method for fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of fuzzy inputs but also reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  15. TG-MS analysis and kinetic study for thermal decomposition of six representative components of municipal solid waste under steam atmosphere.

    PubMed

    Zhang, Jinzhi; Chen, Tianju; Wu, Jingli; Wu, Jinhu

    2015-09-01

    Thermal decomposition of six representative components of municipal solid waste (MSW: lignin, printing paper, cotton, rubber, polyvinyl chloride (PVC), and cabbage) was investigated by thermogravimetric-mass spectroscopy (TG-MS) under a steam atmosphere. Compared with TG and derivative thermogravimetric (DTG) curves under an N2 atmosphere, the thermal decomposition of MSW components under steam was divided into pyrolysis and gasification stages. In the pyrolysis stage, the shapes of the TG and DTG curves under steam were almost the same as those under N2. In the gasification stage, the presence of steam led to a greater mass loss because of steam partial oxidation of the char residue. The evolution profiles of H2, CH4, CO and CO2 were consistent with the DTG curves in terms of the appearance of peaks and the relevant stages over the whole temperature range, and steam partial oxidation of the char residue promoted the generation of more gas products in the high-temperature range. The multi-Gaussian distributed activation energy model (DAEM) proved plausible for describing the thermal decomposition behaviours of MSW components under a steam atmosphere. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate

    PubMed Central

    Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.

    2009-01-01

    Objective(s): An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods: This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods in SOLAR were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results: Four principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol, and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) segregated with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s): This study investigated a number of CVD risk traits in a unique isolated population. Findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c, and total triglyceride levels. PMID:19339786

  17. SPIDERS IN DECOMPOSITION FOOD WEBS OF AGROECOSYSTEMS: THEORY AND EVIDENCE. (R826099)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  18. Discrimination of gender-, speed-, and shoe-dependent movement patterns in runners using full-body kinematics.

    PubMed

    Maurer, Christian; Federolf, Peter; von Tscharner, Vinzenz; Stirling, Lisa; Nigg, Benno M

    2012-05-01

    Changes in gait kinematics have often been analyzed using pattern recognition methods such as principal component analysis (PCA). Usually only the first few principal components are analyzed, because they describe the main variability within a dataset and thus represent the main movement patterns. However, while subtle changes in gait pattern (for instance, due to different footwear) may not change the main movement patterns, they may affect movements represented by higher principal components. This study was designed to test two hypotheses: (1) speed and gender differences can be observed in the first principal components, and (2) small interventions such as changing footwear change the gait characteristics captured by higher principal components. Kinematic changes due to different running conditions (speed: 3.1 m/s and 4.9 m/s; gender; footwear: control shoe and adidas MicroBounce shoe) were investigated by applying PCA and a support vector machine (SVM) to a full-body reflective marker setup. Differences in speed changed the basic movement pattern, as reflected by a change in the time-dependent coefficients derived from the first principal components. Gender was differentiated using the time-dependent coefficients derived from intermediate principal components, which are characterized by limb rotations of the thigh and shank. Different shoe conditions were identified in higher principal components. This study showed that different interventions can be analyzed using a full-body kinematic approach. Within the well-defined vector space spanned by the data of all subjects, higher principal components should also be considered because these components show the differences that result from small interventions such as footwear changes. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
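
    A minimal sketch of a PCA-plus-SVM pipeline of this kind on synthetic flattened kinematic vectors; the dimensions, labels, injected effect, and number of retained components are invented for illustration:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 300))   # 120 trials x flattened marker kinematics
    y = rng.integers(0, 2, 120)           # two shoe conditions (toy labels)
    X[y == 1, 25] += 0.8                  # a subtle effect hidden in one direction

    clf = make_pipeline(PCA(n_components=30), SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())   # classification accuracy
    ```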

  19. An original data treatment for infrared spectra of organic matter, application to extracted soil organic matter

    NASA Astrophysics Data System (ADS)

    Gomes Rossin, Bruna; Redon, Roland; Raynaud, Michel; Nascimento, Nadia Regina; Mounier, Stéphane

    2017-04-01

    Infrared spectra of extracted organic matter are quick and easy to acquire, but generally hard to interpret regarding the presence or absence of certain organic functions. Indeed, organic matter is a complex mixture of molecules with overlapping absorptions, and it is also difficult to obtain well-calibrated or normalised spectra, because the solid content or homogeneity of a sample is rarely well known (Monakhova et al. 2015, Tadini et al. 2015, Bardy et al. 2008). In this work, IRTF (InfraRed Fourier Transform) spectra were treated by an original algorithm developed to obtain the principal components of the spectra and their contributions for each sample. This bilinear decomposition uses a PCA initialisation: the component vectors calculated by PCA are linearly combined to provide non-negative signals minimizing a criterion based on cross-correlation. Hence, using this decomposition, it is possible to define the IRTF signal of organic matter fractions, such as humic or fulvic acid, depending on their origin, for example the surface or depth of soil profiles. The method was applied to a set of samples from the Upper Negro River Basin (Amazon, Brazil) (Bueno, 2009), comprising three soil sequences, each with 10 slices from the surface to two meters depth: a well-drained podzol, a hydromorphic podzol, and a cryptopodzol. From the IRTF data, five representative component spectra were defined for all the extracted organic matter, and, using additional chemical composition information, a mechanism of organic matter fate is proposed to explain the observed results. References: Bardy, M., E. Fritsch, S. Derenne, T. Allard, N. R. do Nascimento, and G. T. Bueno. 2008. "Micromorphology and Spectroscopic Characteristics of Organic Matter in Waterlogged Podzols of the Upper Amazon Basin." Geoderma 145 (3-4): 222-30. Bueno, G. T. 2009. Appauvrissement et podzolisation des latérites du bassin du Rio Negro et genèse des Podzols dans le haut bassin amazonien [PhD thesis]. Universidade Estadual Paulista "Júlio de Mesquita Filho". Monakhova, Yulia B., Alexey M. Tsikin, Svetlana P. Mushtakova, and Mauro Mecozzi. 2015. "Independent Component Analysis and Multivariate Curve Resolution to Improve Spectral Interpretation of Complex Spectroscopic Data Sets: Application to Infrared Spectra of Marine Organic Matter Aggregates." Microchemical Journal 118 (January): 211-22. Tadini, Amanda Maria, Gustavo Nicolodelli, Stephane Mounier, Célia Regina Montes, and Débora Marcondes Bastos Pereira Milori. 2015. "The Importance of Humin in Soil Characterisation: A Study on Amazonian Soils Using Different Fluorescence Techniques." The Science of the Total Environment 537 (December): 152-58.

  20. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.

  1. On the application of the Principal Component Analysis for an efficient climate downscaling of surface wind fields

    NASA Astrophysics Data System (ADS)

    Chavez, Roberto; Lozano, Sergio; Correia, Pedro; Sanz-Rodrigo, Javier; Probst, Oliver

    2013-04-01

    With the purpose of efficiently and reliably generating long-term wind resource maps for the wind energy industry, the application and verification of a statistical methodology for the climate downscaling of wind fields at surface level is presented in this work. The procedure is based on the combination of the Monte Carlo method and Principal Component Analysis (PCA). First, the Monte Carlo method is used to create a large number of daily-based annual time series, so-called climate representative years, by stratified sampling of a 33-year-long time series corresponding to the available period of the NCAR/NCEP global reanalysis data set (R-2). Second, the representative years are evaluated and the best set is chosen according to its capability to recreate the sea level pressure (SLP) temporal and spatial fields of the R-2 data set. The measure of this correspondence is based on the Euclidean distance between the Empirical Orthogonal Function (EOF) spaces generated by the PCA decomposition of the SLP fields from the long-term and the representative-year data sets. The methodology was verified by comparing the selected 365-day period against a 9-year period of wind fields generated by dynamically downscaling Global Forecast System data with the mesoscale model SKIRON for the Iberian Peninsula. These results showed that, compared to the traditional approach of dynamically downscaling a randomly chosen 365-day period, the error in the average wind velocity for the PCA representative year was reduced by almost 30%. Moreover, the mean absolute errors (MAE) in the monthly and daily wind profiles were also reduced by almost 25% across all SKIRON grid points. The methodology yielded maximum errors in the mean wind speed of 0.8 m/s and maximum MAE in the monthly curves of 0.7 m/s. Beyond the bulk numbers, this work shows the spatial distribution of the errors across the Iberian domain and additional wind statistics such as velocity and directional frequency. Additional repetitions were performed to prove the reliability and robustness of this kind of statistical-dynamical downscaling method.

  2. Nitrogen Addition Altered the Effect of Belowground C Allocation on Soil Respiration in a Subtropical Forest

    PubMed Central

    He, Tongxin; Wang, Qingkui; Wang, Silong; Zhang, Fangyue

    2016-01-01

    The availabilities of carbon (C) and nitrogen (N) in soil play an important role in soil carbon dioxide (CO2) emission. However, the variation in soil respiration (Rs) and the response of the microbial community to combined changes in belowground C and N inputs in forest ecosystems are not yet fully understood. Stem girdling and N addition were performed in this study to evaluate the effects of C supply and N availability on Rs and the soil microbial community in a subtropical forest. The trees were girdled on 1 July 2012. Rs was monitored from July 2012 to November 2013, and soil microbial community composition was examined by phospholipid fatty acids (PLFAs) one year after girdling. Results showed that Rs decreased by 40.5% with girdling alone, whereas N addition alone did not change Rs. Interestingly, Rs decreased by 62.7% under girdling with N addition. The reducing effect of girdling and N addition on Rs differed between the dormant and growing seasons: girdling alone reduced Rs by 33.9% in the dormant season and 54.8% in the growing season compared with the control, while girdling with N addition decreased Rs by 59.5% in the dormant season and 65.4% in the growing season. Girdling and N addition significantly decreased the total and bacterial PLFAs, and the effect of N addition was greater than that of girdling. In principal component analysis, both the girdling and N addition treatments were separated from the control along the first principal component, indicating that girdling and N addition changed the soil microbial community composition. However, the girdling with N addition treatment was separated from the N addition treatment along the second principal component, suggesting that N addition altered the effect of girdling on soil microbial community composition. These results suggest that the increase in soil N availability by N deposition alters the effect of belowground C allocation on the decomposition of soil organic matter by altering the composition of the soil microbial community. PMID:27213934

  3. Orthogonal decomposition of left ventricular remodeling in myocardial infarction.

    PubMed

    Zhang, Xingyu; Medrano-Gracia, Pau; Ambale-Venkatesh, Bharath; Bluemke, David A; Cowan, Brett R; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Young, Alistair A; Suinesiaputra, Avan

    2017-03-01

    Left ventricular size and shape are important for quantifying cardiac remodeling in response to cardiovascular disease. Geometric remodeling indices have been shown to have prognostic value in predicting adverse events in the clinical literature, but these often describe interrelated shape changes. We developed a novel method for deriving orthogonal remodeling components directly from any (moderately independent) set of clinical remodeling indices. Six clinical remodeling indices (end-diastolic volume index, sphericity, relative wall thickness, ejection fraction, apical conicity, and longitudinal shortening) were evaluated using cardiac magnetic resonance images of 300 patients with myocardial infarction, and 1991 asymptomatic subjects, obtained from the Cardiac Atlas Project. Partial least squares (PLS) regression of left ventricular shape models resulted in remodeling components that were optimally associated with each remodeling index. A Gram-Schmidt orthogonalization process, by which remodeling components were successively removed from the shape space in the order of shape variance explained, resulted in a set of orthonormal remodeling components. Remodeling scores could then be calculated that quantify the amount of each remodeling component present in each case. A one-factor PLS regression led to more decoupling between scores from the different remodeling components across the entire cohort, and zero correlation between clinical indices and subsequent scores. The PLS orthogonal remodeling components had similar power to describe differences between myocardial infarction patients and asymptomatic subjects as principal component analysis, but were better associated with well-understood clinical indices of cardiac remodeling. The data and analyses are available from www.cardiacatlas.org. © The Author 2017. Published by Oxford University Press.

  4. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowal, Grzegorz; Lazarian, A., E-mail: kowal@astro.wisc.ed, E-mail: lazarian@astro.wisc.ed

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.
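
    A hedged sketch of the Helmholtz (compressible/incompressible) split for a periodic 2D velocity field via Fourier projection: the compressible part is the projection of each Fourier mode onto its wavevector, and the incompressible part is the remainder. This is a toy analogue of the decomposition named above, not the paper's 3D MHD pipeline:

    ```python
    import numpy as np

    n = 64
    v = np.random.default_rng(0).standard_normal((2, n, n))   # (vx, vy) on a grid

    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                            # avoid 0/0 at the mean mode

    vhat = np.fft.fft2(v)                                     # transforms last two axes
    div_hat = kx * vhat[0] + ky * vhat[1]                     # k . v_hat per mode
    comp_hat = np.stack([kx, ky]) * div_hat / k2              # curl-free (compressible) part
    incomp_hat = vhat - comp_hat                              # divergence-free part

    v_comp = np.real(np.fft.ifft2(comp_hat))
    v_inc = np.real(np.fft.ifft2(incomp_hat))
    # The incompressible part has (numerically) zero divergence in Fourier space:
    assert np.allclose(kx * incomp_hat[0] + ky * incomp_hat[1], 0, atol=1e-6)
    ```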

  5. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
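
    A minimal sketch of functional PCA for densely and regularly sampled curves via an SVD of the centered data matrix; sparse eGFR records would need the article's mixed-model machinery instead, and the curve shapes and sizes below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 36, 37)                    # months post-transplant
    n = 150
    latent = rng.standard_normal((n, 2))
    curves = (60 + 5 * latent[:, :1] * np.sin(np.pi * t / 36)
              + 3 * latent[:, 1:] * (t / 36)
              + rng.standard_normal((n, t.size)))   # n patients x time grid

    mean_curve = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

    fpc = Vt[:2]                    # first two eigenfunctions (discretized)
    scores = U[:, :2] * s[:2]       # per-patient FPC scores
    var_explained = s[:2] ** 2 / np.sum(s ** 2)
    # Scores can be used to cluster trajectories or flag abnormal curves,
    # e.g. an extreme score on a declining-trend component.
    ```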

  6. Aircraft vortex marking program

    NASA Technical Reports Server (NTRS)

    Pompa, M. F.

    1979-01-01

    A simple, reliable device for identifying atmospheric vortices, principally those generated by aircraft in flight, is presented, with emphasis on the use of nonpolluting aerosols injected into the vortex for marking. From an analysis of aerosol optical and transport properties, the refractive index and droplet size were determined to be the most significant parameters in achieving optimum light scattering in the vortex (for visual sighting) and a visual persistency of at least 300 sec. The analysis also showed that a steam-ejected tetraethylene glycol aerosol with droplet size near 1 micron and a refractive index of approximately 1.45 could be a promising candidate for vortex marking. A marking aerosol was successfully generated with the steam-tetraethylene glycol mixture from breadboard system hardware. A compact 25-lbf-thrust (nominal) H2O2 rocket chamber was the key component of the system, producing the required steam by catalytic decomposition of the supplied H2O2.

  7. Investigating the Impact of Asp181 Point Mutations on Interactions between PTP1B and Phosphotyrosine Substrate

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Wang, Lushan; Sun, Xun; Zhao, Xian

    2014-05-01

    Protein tyrosine phosphatase 1B (PTP1B) is a key negative regulator of insulin and leptin signaling, which suggests that it is an attractive therapeutic target in type II diabetes and obesity. The aim of this research is to explore how residues that interact with the phosphotyrosine substrate are affected by D181 point mutations, leading to increased substrate binding. To achieve this goal, molecular dynamics simulations were performed on the wild type (WT) and two mutated PTP1B/substrate complexes. The cross-correlation and principal component analyses show that the point mutations can affect the motions of some residues in the active site of PTP1B. Moreover, the hydrogen bond and energy decomposition analyses indicate that, apart from residue 181, the point mutations influence the interactions of the substrate with several residues in the active site of PTP1B.

  8. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
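
    The core idea, replacing the covariance eigendecomposition with an EM iteration, can be sketched in a few lines (after Roweis's EM algorithm for PCA). This is a serial illustration of the principle, not the paper's parallel architecture.

    ```python
    import numpy as np

    def em_pca(Y, n_comp, n_iter=100, seed=0):
        """EM algorithm for PCA (after Roweis, 1998): avoids forming and
        eigendecomposing the full covariance matrix.

        Y: (d, n) data matrix with zero-mean columns.
        """
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((Y.shape[0], n_comp))
        for _ in range(n_iter):
            # E-step: latent coordinates given the current basis
            X = np.linalg.solve(W.T @ W, W.T @ Y)
            # M-step: basis given the latent coordinates
            W = Y @ X.T @ np.linalg.inv(X @ X.T)
        # Orthonormalize: the columns span the principal subspace
        Q, _ = np.linalg.qr(W)
        return Q
    ```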

  9. Developing a Complex Independent Component Analysis (CICA) Technique to Extract Non-stationary Patterns from Geophysical Time Series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael

    2017-12-01

    In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes, respectively, that represent the maximum variance of time series. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert-transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans, are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from GRACE TWS with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.
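
    Step (a) of the construction is easy to reproduce: form the analytic signal so the imaginary part is the Hilbert transform of the data. The sketch below does this and extracts dominant complex modes by SVD, i.e., the CEOF baseline the paper compares against; the full CICA would replace the SVD with a complex ICA based on fourth-order cumulants (step b).

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def complex_modes(X, n_modes=4):
        """Build the complex dataset (real part: observations; imaginary
        part: their Hilbert transform) and extract dominant complex modes
        via SVD, i.e., a CEOF-style decomposition.

        X: (n_times, n_locations) anomaly time series.
        """
        Z = hilbert(X, axis=0)            # analytic signal: X + i*H(X)
        U, s, Vh = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
        amplitudes = U[:, :n_modes] * s[:n_modes]   # temporal amplitude/phase
        patterns = Vh[:n_modes]                     # spatial amplitude/phase
        return amplitudes, patterns
    ```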

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karwacki, C.J.; Buchanan, J.H.; Mahle, J.J.

    Experimental data are reported for the desorption of bis-2-chloroethyl sulfide, (a sulfur mustard or HD) and its decomposition products from activated coconut shell carbon (CSC). The results show that under equilibrium conditions changes in the HD partial pressure are affected primarily by its loading and temperature of the adsorbent. The partial pressure of adsorbed HD is found to increase by about a decade for each 25 C increase in temperature for CSC containing 0.01--0.1 g/g HD. Adsorption equilibria of HD appear to be little affected by coadsorbed water. Although complicated by its decomposition, the distribution of adsorbed HD (of known amount) appears to occupy pores of similar energy whether dry or in the presence of adsorbed water. On dry CSC adsorbed HD appears stable, while in the presence of water its decomposition is marked by hydrolysis at low temperature and thermal decomposition at elevated temperatures. The principal volatile products desorbed are 1,4-thioxane, 2-chloroethyl vinyl sulfide and 1,4-dithiane, with the latter favoring elevated temperatures.

  11. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  12. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
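
    A single-voxel sketch of a multipeak chemical-shift decomposition is shown below, assuming a known field map; the fat peak frequencies and relative amplitudes are placeholders, and the paper's contribution (modulating each spectral peak by the bSSFP frequency response) would enter where the fat signal model `c_fat` is built.

    ```python
    import numpy as np

    def fat_water_voxel(signal, tes, psi, fat_freqs, fat_amps):
        """Linear least-squares fat/water split for one voxel.

        signal: complex samples at echo times `tes` (s); psi: field map
        (Hz); fat_freqs/fat_amps: multipeak fat spectrum (illustrative
        values, not the calibrated spectrum used in practice).
        """
        demod = signal * np.exp(-2j * np.pi * psi * tes)      # remove B0 phase
        c_fat = np.array([(fat_amps * np.exp(2j * np.pi * fat_freqs * t)).sum()
                          for t in tes])
        A = np.stack([np.ones_like(c_fat), c_fat], axis=1)    # water, fat columns
        (water, fat), *_ = np.linalg.lstsq(A, demod, rcond=None)
        return water, fat
    ```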

  13. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  14. On the decomposition of synchronous state machines using sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Hebbalalu, K.; Whitaker, S.; Cameron, K.

    1992-01-01

    This paper presents several techniques for the decomposition of synchronous state machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) design technique for generating the component machines, include greater ease and speed in the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design, leading to negligible re-design time.

  15. Mother's education is the most important factor in socio-economic inequality of child stunting in Iran.

    PubMed

    Emamian, Mohammad Hassan; Fateh, Mansooreh; Gorgani, Neman; Fotouhi, Akbar

    2014-09-01

    Malnutrition is one of the most important health problems, especially in developing countries. The present study aimed to describe the socio-economic inequality in stunting and its determinants in Iran for the first time. Cross-sectional, population-based survey, carried out in 2009. Using randomized cluster sampling, weight and height of children were measured and anthropometric indices were calculated based on child growth standards given by the WHO. Socio-economic status of families was determined using principal component analysis on household assets and social specifications of families. The concentration index was used to calculate socio-economic inequality in stunting and its determinants were measured by decomposition of this index. Factors affecting the gap between socio-economic groups were recognized by using the Oaxaca-Blinder decomposition method. Shahroud District in north-eastern Iran. Children (n 1395) aged <6 years. The concentration index for socio-economic inequality in stunting was -0·1913. Mother's education contributed 70 % in decomposition of this index. Mean height-for-age Z-score was -0·544 and -0·335 for low and high socio-economic groups, respectively. Mother's education was the factor contributing most to the gap between these two groups. There was a significant socio-economic inequality in the studied children. If mother's education is distributed equally in all the different groups of Iranian society, one can expect to eliminate 70 % of the socio-economic inequalities. Even in high socio-economic groups, the mean height-for-age Z-score was lower than the international standards. These issues emphasize the necessity of applying new interventions especially for the improvement of maternal education.
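
    The concentration index used here has a simple covariance form, C = 2·cov(h, r)/μ, where r is the fractional socio-economic rank and μ the mean of the health variable. A minimal sketch follows, with stunting coded 0/1 and SES taken from an asset-based principal component, as described above.

    ```python
    import numpy as np

    def concentration_index(health, ses):
        """Concentration index C = 2*cov(h, r)/mean(h), with r the
        fractional rank of each child in the socio-economic ordering.
        """
        order = np.argsort(ses)
        r = np.empty(len(ses))
        r[order] = (np.arange(len(ses)) + 0.5) / len(ses)   # fractional ranks
        h = np.asarray(health, dtype=float)
        return 2 * np.cov(h, r, bias=True)[0, 1] / h.mean()
    ```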

  16. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  17. The Butterflies of Principal Components: A Case of Ultrafine-Grained Polyphase Units

    NASA Astrophysics Data System (ADS)

    Rietmeijer, F. J. M.

    1996-03-01

    Dusts in the accretion regions of chondritic interplanetary dust particles [IDPs] consisted of three principal components: carbonaceous units [CUs], carbon-bearing chondritic units [GUs] and carbon-free silicate units [PUs]. Among others, differences among chondritic IDP morphologies and variable bulk C/Si ratios reflect variable mixtures of principal components. The spherical shapes of the initially amorphous principal components remain visible in many chondritic porous IDPs, but fusion was documented for CUs, GUs and PUs. The PUs occur as coarse- and ultrafine-grained units that include the so-called GEMS. Spherical principal components preserved in an IDP as recognisable textural units have unique properties with important implications for their petrological evolution, from pre-accretion processing to protoplanet alteration and dynamic pyrometamorphism. Throughout their lifetime the units behaved as closed systems without chemical exchange with other units. This behaviour is reflected in their mineralogies, while the bulk compositions of principal components define the environments wherein they were formed.

  18. Polarimetric scattering model for estimation of above ground biomass of multilayer vegetation using ALOS-PALSAR quad-pol data

    NASA Astrophysics Data System (ADS)

    Sai Bharadwaj, P.; Kumar, Shashi; Kushwaha, S. P. S.; Bijker, Wietske

    Forests are important biomes covering a major part of the vegetation on the Earth, and as such account for seventy percent of the carbon present in living beings. A forest's above ground biomass (AGB) is considered an important parameter for the estimation of global carbon content. In the present study, quad-pol ALOS-PALSAR data were used for the estimation of AGB for the Dudhwa National Park, India. For this purpose, polarimetric decomposition components and an Extended Water Cloud Model (EWCM) were used. The PolSAR data orientation angle shifts were compensated for before the polarimetric decomposition. The scattering components obtained from the polarimetric decomposition were used in the Water Cloud Model (WCM). The WCM was extended for higher order interactions like double bounce scattering. The parameters of the EWCM were retrieved using the field measurements and the decomposition components. Finally, the relationship between the estimated AGB and measured AGB was assessed. The coefficient of determination (R²) and root mean square error (RMSE) were 0.4341 and 119 t/ha, respectively.
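
    A generic Water Cloud Model has the form σ⁰ = A·V·(1 − e^{−2BV}) + σ⁰_ground·e^{−2BV}, a vegetation term plus a canopy-attenuated ground term. The sketch below fits such a model with scipy; the parameter names are illustrative, and the paper's Extended WCM adds a double-bounce term driven by the decomposition components.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wcm(agb, A, B, sigma_ground):
        """Generic Water Cloud Model: vegetation backscatter plus ground
        backscatter attenuated two ways through the canopy."""
        att = np.exp(-2.0 * B * agb)              # two-way canopy attenuation
        return A * agb * (1.0 - att) + sigma_ground * att

    # Fit against field-measured AGB and a decomposition scattering
    # component (hypothetical arrays):
    # params, _ = curve_fit(wcm, agb_field, sigma0_component,
    #                       p0=[0.01, 0.005, 0.1])
    ```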

  19. A decomposition model and voxel selection framework for fMRI analysis to predict neural response of visual stimuli.

    PubMed

    Raut, Savita V; Yadav, Dinkar M

    2018-03-28

    This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses frequency components, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effects of the number of selected voxels and of the selection constraints are analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.

  20. [Effects of snow cover on water soluble and organic solvent soluble components during foliar litter decomposition in an alpine forest].

    PubMed

    Xu, Li-Ya; Yang, Wan-Qin; Li, Han; Ni, Xiang-Yin; He, Jie; Wu, Fu-Zhong

    2014-11-01

    Seasonal snow cover may change the characteristics of freezing, leaching and freeze-thaw cycles in the scenario of climate change, and thus play important roles in the dynamics of water soluble and organic solvent soluble components during foliar litter decomposition in the alpine forest. Therefore, a field litterbag experiment was conducted in an alpine forest in western Sichuan, China. The foliar litterbags of typical tree species (birch, cypress, larch and fir) and shrub species (willow and azalea) were placed on the forest floor under different snow cover thicknesses (deep snow, medium snow, thin snow and no snow). The litterbags were sampled at the snow formation stage, snow cover stage and snow melting stage in winter. The results showed that the content of water soluble components of the six foliar litters decreased at the snow formation and snow melting stages, but increased at the snow cover stage as litter decomposition proceeded over the winter. Except for azalea foliar litter, whose organic solvent soluble component content increased at the snow cover stage, the organic solvent soluble component contents of the other five foliar litters decreased continuously over the winter. Compared with the content of organic solvent soluble components, the content of water soluble components was more strongly affected by snow cover thickness, especially at the snow formation and snow cover stages. Compared with the thicker snow covers, the thin snow cover promoted the decrease of water soluble component contents in willow and azalea foliar litter and restrained the decrease of water soluble component content in cypress foliar litter. Few changes in the content of water soluble components of birch, fir and larch foliar litter were observed under the different thicknesses of snow cover. The results suggested that the effects of snow cover on the contents of water soluble and organic solvent soluble components during litter decomposition would be controlled by litter quality.

  1. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    PubMed

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
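
    The waveform-PCA-plus-score-comparison workflow can be sketched compactly, as below; the 101-point stance-phase grid and the three retained components follow the description above, while the variable names are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def pc_scores_group_test(waveforms, groups, n_pc=3):
        """PCA of time-normalized kinematic waveforms with t-tests on the
        retained PC scores.

        waveforms: (n_subjects, 101) stance-phase curves;
        groups: boolean array (True = previous ITBS).
        """
        centered = waveforms - waveforms.mean(axis=0)
        U, s, Vh = np.linalg.svd(centered, full_matrices=False)
        scores = U[:, :n_pc] * s[:n_pc]
        explained = (s**2 / (s**2).sum())[:n_pc]   # variance explained per PC
        tests = [stats.ttest_ind(scores[groups, k], scores[~groups, k])
                 for k in range(n_pc)]
        return scores, explained, tests
    ```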

  2. Quantitative Comparison of the Variability in Observed and Simulated Shortwave Reflectance

    NASA Technical Reports Server (NTRS)

    Roberts, Yolanda, L.; Pilewskie, P.; Kindel, B. C.; Feldman, D. R.; Collins, W. D.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system that has been designed to monitor the Earth's climate with unprecedented absolute radiometric accuracy and SI traceability. Climate Observation System Simulation Experiments (OSSEs) have been generated to simulate CLARREO hyperspectral shortwave imager measurements to help define the measurement characteristics needed for CLARREO to achieve its objectives. To evaluate how well the OSSE-simulated reflectance spectra reproduce the Earth's climate variability at the beginning of the 21st century, we compared the variability of the OSSE reflectance spectra to that of the reflectance spectra measured by the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY). Principal component analysis (PCA) is a multivariate decomposition technique used to represent and study the variability of hyperspectral radiation measurements. Using PCA, between 99.7% and 99.9% of the total variance of the OSSE and SCIAMACHY data sets can be explained by subspaces defined by six principal components (PCs). To quantify how much information is shared between the simulated and observed data sets, we spectrally decomposed the intersection of the two data set subspaces. The results from four cases in 2004 showed that the two data sets share eight (January and October) and seven (April and July) dimensions, which correspond to about 99.9% of the total SCIAMACHY variance for each month. The spectral nature of these shared spaces, understood by examining the transformed eigenvectors calculated from the subspace intersections, exhibits similar physical characteristics to the original PCs calculated from each data set, such as water vapor absorption, vegetation reflectance, and cloud reflectance.
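
    The degree of overlap between two PC subspaces can be quantified with principal angles, a simple stand-in for the subspace-intersection analysis described above; the 10° cutoff below is illustrative.

    ```python
    import numpy as np
    from scipy.linalg import subspace_angles

    def shared_dimensions(pcs_a, pcs_b, tol_deg=10.0):
        """Count near-common dimensions of two PC subspaces via principal
        angles.

        pcs_a, pcs_b: (n_wavelengths, k) matrices whose columns are the
        retained PCs from the two data sets.
        """
        angles = np.degrees(subspace_angles(pcs_a, pcs_b))
        return int((angles < tol_deg).sum()), angles
    ```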

  3. Nucleation and Spinodal Decomposition in Ternary-Component Alloys

    DTIC Science & Technology

    2009-07-30

    The mixture is prepared at a high temperature and then rapidly quenched, or cooled, to form a solid. During the process of quenching, the components undergo a phase separation. [Only fragments of this report are recoverable; among the works it cites is Barbara Stoth and Thomas Wanner, "Spinodal Decomposition for Multicomponent Cahn-Hilliard Systems," Journal of Statistical Physics 98 (1999), 871–895.]

  4. Multiple-component Decomposition from Millimeter Single-channel Data

    NASA Astrophysics Data System (ADS)

    Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros

    2018-03-01

    We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources, but we apply the same methodology to the AzTEC (Astronomical Thermal Emission Camera)/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
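
    A toy version of the unmixing step can be built with an off-the-shelf ICA: treat each redundant map as a mixture of a few underlying components and separate them blindly. All names and sizes below are illustrative, not the paper's pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Build artificially redundant "maps" as mixtures of non-Gaussian
    # components (stand-ins for sources, atmosphere, foregrounds).
    rng = np.random.default_rng(0)
    true_components = rng.laplace(size=(4, 10000))    # 4 components, n pixels
    mixing = rng.standard_normal((12, 4))             # 12 redundant maps
    redundant_maps = mixing @ true_components

    # Blind separation; recovered maps would then be calibrated and measured.
    ica = FastICA(n_components=4, random_state=0, max_iter=1000)
    recovered = ica.fit_transform(redundant_maps.T).T  # (4, n_pixels)
    ```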

  5. Investigating carbon dynamics in Siberian peat bogs using molecular-level analyses

    NASA Astrophysics Data System (ADS)

    Kaiser, K.; Benner, R. H.

    2013-12-01

    Total hydrolysable carbohydrates, and lignin and cutin acid compounds were analyzed in peat cores collected 56.8 N (SIB04), 58.4 N (SIB06), 63.8 N (G137) and 66.5 N (E113) in the Western Siberian Lowland to investigate vegetation, chemical compositions and the stage of decomposition. Sphagnum mosses dominated peatland vegetation in all four cores. High-resolution molecular analyses revealed rapid vegetation changes on timescales of 50-200 years in the southern cores Sib4 and Sib6. Syringyl and vanillyl (S/V) ratios and cutin acids indicated these vegetation changes were due to varying inputs of angiosperm and gymnosperm and root material. In the G137 and E113 cores lichens briefly replaced sphagnum mosses and vascular plants. Molecular decomposition indicators used in this study tracked the decomposition of different organic constituents of peat organic matter. The carbohydrate decomposition index was sensitive to the polysaccharide component of all peat-forming plants, whereas acid/aldehyde ratios of S and V phenols (Ac/AlS,V) followed the lignin component of vascular plants. Low carbohydrate decomposition indices in peat layers corresponded well with elevated (Ad/Al)S,V ratios. This suggested both classes of biochemicals were simultaneously decomposed, and decomposition processes were associated with extensive total mass loss in these ombrotrophic systems. Selective decomposition or transformation of lignin was observed in the permafrost-influenced northern cores G137 and E113. Both cores exhibited the highest (Ad/Al)S,V ratios, almost four-fold higher than measured in peat-forming plants. The extent of decomposition in the four peat cores did not uniformly increase with age, but showed episodic extensive decomposition events. Variable decomposition events independent of climatic conditions and vegetation shifts highlight the complexity of peatland dynamics.

  6. Reduced nonlinear prognostic model construction from high-dimensional data

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2017-04-01

    Construction of a data-driven model of evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time in many applications real-measured data is high-dimensional, e.g. it is space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to an evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, the model of evolution operator for PCs is constructed which maps a number of states in the past to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separately linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret degree of nonlinearity as well as to deal better with smooth PCs which can naturally occur in the decompositions like NDM, as they provide a time scale separation. Results of application of the proposed method to climate data are demonstrated and discussed. The study is supported by Government of Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
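
    A purely linear version of the two reduction steps can be sketched as follows: project onto leading PCs, then fit an operator mapping a few lagged states to the current state by least squares. The paper's operator additionally includes neural-network nonlinear and stochastic terms and a second decomposition of the lagged space, both omitted here.

    ```python
    import numpy as np

    def reduced_linear_operator(data, n_pc=5, lags=3):
        """Reduce space-distributed data to PCs, then fit a linear
        evolution operator mapping `lags` past states to the current one.

        data: (n_times, n_space) array, e.g. a gridded climate field.
        """
        anomalies = data - data.mean(axis=0)
        U, s, Vh = np.linalg.svd(anomalies, full_matrices=False)
        pcs = U[:, :n_pc] * s[:n_pc]                  # principal components
        past = np.hstack([pcs[k:len(pcs) - lags + k] for k in range(lags)])
        current = pcs[lags:]
        A, *_ = np.linalg.lstsq(past, current, rcond=None)
        return A, Vh[:n_pc]                           # operator, spatial modes
    ```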

  7. Optimal classification for the diagnosis of duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and principal features were then selected. A scale transform was then performed on the MRI images. Afterward, SVM-based classifiers of MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. The optimal SVM-based classifier, expressed as [Formula: see text], was then identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). Sixteen SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The classifier [Formula: see text] on T1W images at level-2 decomposition showed the highest performance of all, with overall sensitivity, specificity, and accuracy reaching 96.9%, 97.3%, and 97.1%, respectively, demonstrating that it was the optimal classifier for the diagnosis of DMD.
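
    The (C, γ) search is a standard grid search over an RBF-kernel SVM; a minimal scikit-learn sketch follows, with an illustrative grid rather than the paper's values, and hypothetical feature/label arrays.

    ```python
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # Grid search over the cost C and kernel width gamma of an RBF SVM.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
        scoring="accuracy", cv=5,
    )
    # grid.fit(texture_features, labels)   # (n_images, n_features), (n_images,)
    # grid.best_params_                    # the selected (C, gamma) pair
    ```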

  8. Theoretical study of the decomposition mechanism of environmentally friendly insulating medium C3F7CN in the presence of H2O in a discharge

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxing; Li, Yi; Xiao, Song; Tian, Shuangshuang; Deng, Zaitao; Tang, Ju

    2017-08-01

    C3F7CN has been the focus of the alternative-gas research field over the past two years because of its excellent insulation properties and environmental characteristics. Experimental studies of its insulation performance have yielded many results; however, few studies exist on the formation mechanism of its decomposition components. A discussion of the decomposition characteristics of insulating media will provide guidance for experimental research and for the work that must be completed before further engineering application. In this study, the decomposition mechanism of C3F7CN in the presence of trace H2O under discharge was calculated based on density functional theory and transition state theory. The reaction heat, Gibbs free energy, and activation energy of different decomposition pathways were investigated. The ionization parameters and toxicity of C3F7CN and various decomposition products were analyzed from the molecular-structure perspective. The formation mechanism of the C3F7CN discharge decomposition components and the influence of trace water were evaluated. This paper confirms that C3F7CN has excellent decomposition characteristics, which provides theoretical support for later experiments and related engineering applications. However, the presence of trace water has a negative impact on C3F7CN's insulation performance. Thus, strict trace-water content standards should be developed to ensure dielectric insulation and the safety of maintenance personnel.

  9. A discrete structure of the brain waves.

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Perotti, Luca; oscillons in biological rhythms Collaboration; physics of biological rhythms Team

    A physiological interpretation of biological rhythms, e.g., of the local field potentials (LFP), depends on the mathematical approaches used for the analysis. Most existing mathematical methods are based on decomposing the signal into a set of "primitives," e.g., sinusoidal harmonics, and correlating them with different cognitive and behavioral phenomena. A common feature of all these methods is that the decomposition semantics is presumed from the onset, and the goal of the subsequent analysis reduces merely to identifying the combination that best reproduces the original signal. We propose a fundamentally new method in which the decomposition components are discovered empirically, and demonstrate that it is more flexible and more sensitive to the signal's structure than the standard Fourier method. Applying this method to rodent LFP signals reveals a fundamentally new structure of these "brain waves." In particular, our results suggest that the LFP oscillations consist of a superposition of a small, discrete set of frequency-modulated oscillatory processes, which we call "oscillons." Since these structures are discovered empirically, we hypothesize that they may capture the signal's actual physical structure, i.e., the pattern of synchronous activity in neuronal ensembles. Proving this hypothesis will help to advance our principal understanding of the neuronal synchronization mechanisms and reveal new structure within LFPs and other biological oscillations. NSF 1422438 Grant, Houston Bioinformatics Endowment Fund.

  10. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimize the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
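
    The ISVD stage can be sketched as an SVD of a trajectory (Hankel-type) matrix with Savitzky-Golay smoothing applied to the retained singular vectors, as below. The window, polynomial order, and rank here are illustrative, whereas the paper selects them by Hilbert spectrum entropy with a stepwise optimization.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def isvd_denoise(x, embed=512, rank=20, window=31, polyorder=3):
        """SVD-based de-noising with S-G smoothing of the singular vectors."""
        n = len(x) - embed + 1
        H = np.lib.stride_tricks.sliding_window_view(x, embed)  # trajectory matrix
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        U_s = savgol_filter(U[:, :rank], window, polyorder, axis=0)  # smooth SVs
        H_d = (U_s * s[:rank]) @ Vh[:rank]
        # Average anti-diagonals to map the matrix back to a 1-D signal
        out = np.zeros(len(x)); cnt = np.zeros(len(x))
        for i in range(n):
            out[i:i + embed] += H_d[i]; cnt[i:i + embed] += 1
        return out / cnt
    ```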

  11. Nitric Acid Uptake and Decomposition on Black Carbon (Soot) Surfaces: Its Implications for the Upper Troposphere and Lower Stratosphere

    NASA Technical Reports Server (NTRS)

    Choi, W.; Leu, M. T.

    1998-01-01

    Black carbon particles (soot) are formed as a result of incomplete combustion processes and are ubiquitous in the atmosphere. The lower troposphere contains plenty of soot particles whose principal sources are fossil fuel and biomass combustion at the ground level.

  12. PHOTOREACTIVITY OF CHROMOPHORIC DISSOLVED ORGANIC MATTER (CDOM) DERIVED FROM DECOMPOSITION OF VARIOUS VASCULAR PLANT AND ALGAL SOURCES. (R826939)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  13. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require oversight of an expert user. This paper introduces an intuitive and easy to implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method firstly separates vibration signals according to their spectral amplitudes, and secondly uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
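
    The squared envelope spectrum at the heart of the second step has a short implementation: square the Hilbert envelope of the (amplitude- or band-selected) signal and take its spectrum, where bearing faults show up as peaks at their characteristic cyclic frequencies.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def squared_envelope_spectrum(x, fs):
        """Squared envelope spectrum of a vibration signal sampled at fs (Hz)."""
        env2 = np.abs(hilbert(x)) ** 2          # squared envelope
        env2 -= env2.mean()                     # drop the DC component
        ses = np.abs(np.fft.rfft(env2)) / len(x)
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)
        return freqs, ses                       # cyclic frequencies, spectrum
    ```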

  14. Descent theory for semiorthogonal decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elagin, Alexei D

    We put forward a method for constructing semiorthogonal decompositions of the derived category of G-equivariant sheaves on a variety X under the assumption that the derived category of sheaves on X admits a semiorthogonal decomposition with components preserved by the action of the group G on X. This method is used to obtain semiorthogonal decompositions of equivariant derived categories for projective bundles and blow-ups with a smooth centre as well as for varieties with a full exceptional collection preserved by the group action. Our main technical tool is descent theory for derived categories. Bibliography: 12 titles.
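
    For reference, the abstract relies on the standard notion of a semiorthogonal decomposition, which can be stated as follows (a textbook definition, not taken from the abstract itself):

    ```latex
    % A triangulated category D admits a semiorthogonal decomposition
    %   D = <A_1, ..., A_n>
    % if the A_i are full triangulated subcategories with
    \[
    \operatorname{Hom}_{\mathcal{D}}(\mathcal{A}_j, \mathcal{A}_i) = 0
    \quad \text{for } j > i,
    \]
    % and every object of D is built from a tower of triangles whose
    % factors lie in A_n, ..., A_1.
    ```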

  15. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  16. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  17. Chemical stability of molten 2,4,6-trinitrotoluene at high pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dattelbaum, Dana M., E-mail: danadat@lanl.gov; Chellappa, Raja S.; Bowden, Patrick R.

    2014-01-13

    2,4,6-trinitrotoluene (TNT) is a molecular explosive that exhibits chemical stability in the molten phase at ambient pressure. A combination of visual, spectroscopic, and structural (x-ray diffraction) methods coupled to high pressure, resistively heated diamond anvil cells was used to determine the melt and decomposition boundaries to >15 GPa. The chemical stability of molten TNT was found to be limited, existing in a small domain of pressure-temperature conditions below 2 GPa. Decomposition dominates the phase diagram at high temperatures beyond 6 GPa. From the calculated bulk temperature rise, we conclude that it is unlikely that TNT melts on its principal Hugoniot.

  18. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
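
    The resemblance is easy to verify numerically: the k-th principal component of high-dimensional random diffusion is close to a cosine with k half-periods. A quick check (the grid sizes are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.standard_normal((2000, 300)), axis=0)  # T steps, d dims
    centered = walk - walk.mean(axis=0)
    U, s, _ = np.linalg.svd(centered, full_matrices=False)
    t = (np.arange(2000) + 0.5) / 2000
    for k in range(3):                   # compare PC k+1 with cos((k+1)*pi*t)
        cosine = np.cos((k + 1) * np.pi * t)
        corr = np.corrcoef(U[:, k], cosine)[0, 1]
        print(f"PC{k + 1}: |corr with cosine| = {abs(corr):.3f}")  # close to 1
    ```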

  19. Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images

    PubMed Central

    Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali

    2015-01-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the inluenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077

  20. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into the Intrinsic Mode Function Components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
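
    The HSA half of the pipeline is straightforward once the IMFs are available; the sketch below computes the instantaneous amplitude and frequency of one IMF via the analytic signal (the EMD sifting that produces the IMFs is omitted, and would come from a sifting routine or a package such as PyEMD).

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def hilbert_spectral_analysis(imf, fs):
        """Instantaneous amplitude and frequency (Hz) of one intrinsic
        mode function, sampled at fs."""
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        return amplitude, inst_freq
    ```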

  1. A Features Selection for Crops Classification

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Shao, Luyi; Yin, Qiang; Hong, Wen

    2016-08-01

    The components of a polarimetric target decomposition reflect differences between targets, since they are linked with the scattering properties of the target, and they can be imported into an SVM as classification features. The result of the decomposition usually concentrates in a subset of the components. Selecting a combination of components can therefore reduce the number of features imported into the SVM. This feature reduction leads to less computation and to targeted classification of a single class when classifying a multi-class area. In this research, we import different combinations of features into the SVM and find a better combination for classification using AGRISAR data.

  2. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods.

    PubMed

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2017-04-22

    Visual acuity, like many other health-related outcomes, is not distributed equally across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for the estimation of inequality was economic status, constructed by principal component analysis on home assets. The inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with high and low economic status was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decompositions. The percent contributions of these three factors in the concentration index and Blinder-Oaxaca decomposition were 41.1% vs. 43.4%, 25.4% vs. 19.1%, and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status and diabetes, had minor contributions. This study showed that individuals with poorer visual acuity were more concentrated among people with lower economic status. The main contributors to this inequality were similar for the concentration index and Blinder-Oaxaca decomposition. It can thus be concluded that setting appropriate interventions to promote literacy and income levels among people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can help to alleviate economic inequality in visual acuity. © 2018 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
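
    The two-fold Blinder-Oaxaca decomposition used above splits the mean gap into an explained (endowments) part and an unexplained (coefficients) part. A minimal sketch with ordinary least squares follows; the design matrices are hypothetical and would include an intercept plus covariates such as age and education.

    ```python
    import numpy as np

    def oaxaca_gap(X_hi, y_hi, X_lo, y_lo):
        """Two-fold Blinder-Oaxaca decomposition of the mean outcome gap
        between a high and a low economic group, with the low group's
        coefficients as reference.
        """
        b_hi, *_ = np.linalg.lstsq(X_hi, y_hi, rcond=None)
        b_lo, *_ = np.linalg.lstsq(X_lo, y_lo, rcond=None)
        dX = X_hi.mean(axis=0) - X_lo.mean(axis=0)
        explained = dX @ b_lo                       # due to covariate levels
        unexplained = X_hi.mean(axis=0) @ (b_hi - b_lo)
        return explained, unexplained
    ```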

  3. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation with singular value decomposition method, the parametric time-frequency analysis with filtering method, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.
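
    The demodulation half of the framework has a simple generic form: multiply by the conjugate of an estimated phase law so the component becomes narrowband. A minimal sketch (the instantaneous-frequency estimate would come from a time-frequency ridge; this is not the paper's full method):

    ```python
    import numpy as np

    def demodulate(x, fs, inst_freq):
        """Remove an estimated frequency-modulation law from a signal.

        x: real or complex signal; inst_freq: per-sample instantaneous
        frequency estimate (Hz) of the targeted FM behavior.
        """
        phase = 2 * np.pi * np.cumsum(inst_freq) / fs   # integrate frequency
        return x * np.exp(-1j * phase)                  # demodulated signal
    ```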

  4. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
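
    The estimator can be written as a penalized least-squares problem, min_b ||y − Xb||² + λ||Db||², solved below by simple augmentation with a second-difference penalty. The paper's point is that the GSVD of the pair (X, D) makes the bias and variance structure of exactly this estimator explicit; the sketch only shows the estimator itself.

    ```python
    import numpy as np

    def penalized_functional_regression(X, y, lam=1.0):
        """Coefficient-function estimate with a roughness penalty, via the
        augmented system min ||y - X b||^2 + lam ||D b||^2."""
        p = X.shape[1]
        D = np.diff(np.eye(p), n=2, axis=0)          # discrete 2nd derivative
        A = np.vstack([X, np.sqrt(lam) * D])
        b = np.concatenate([y, np.zeros(D.shape[0])])
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)
        return beta
    ```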

  5. Coating for components requiring hydrogen peroxide compatibility

    NASA Technical Reports Server (NTRS)

    Yousefiani, Ali (Inventor)

    2010-01-01

    The present invention provides a heretofore-unknown use for zirconium nitride as a hydrogen peroxide compatible protective coating that was discovered to be useful to protect components that catalyze the decomposition of hydrogen peroxide or corrode when exposed to hydrogen peroxide. A zirconium nitride coating of the invention may be applied to a variety of substrates (e.g., metals) using art-recognized techniques, such as plasma vapor deposition. The present invention further provides components and articles of manufacture having hydrogen peroxide compatibility, particularly components for use in aerospace and industrial manufacturing applications. The zirconium nitride barrier coating of the invention provides protection from corrosion by reaction with hydrogen peroxide, as well as prevention of hydrogen peroxide decomposition.

  6. Singular-value decomposition of a tomosynthesis system

    PubMed Central

    Burvall, Anna; Barrett, Harrison H.; Myers, Kyle J.; Dainty, Christopher

    2010-01-01

    Tomosynthesis is an emerging technique with potential to replace mammography, since it gives 3D information at a relatively small increase in dose and cost. We present an analytical singular-value decomposition of a tomosynthesis system, which provides the measurement component of any given object. The method is demonstrated on an example object. The measurement component can be used as a reconstruction of the object, and can also be utilized in future observer studies of tomosynthesis image quality. PMID:20940966
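
    On a discretized system, the measurement component is the projection of the object onto the row space of the system matrix, which the SVD provides directly. A small numerical sketch follows (the paper derives the decomposition analytically for the continuous system):

    ```python
    import numpy as np

    def measurement_component(H, f, rank_tol=1e-10):
        """Project an object vector f onto the measurement space of a
        discretized system matrix H: the part of f the system can 'see'."""
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        V = Vh[s > rank_tol * s[0]]          # right singular vectors, row space
        return V.T @ (V @ f)                 # measurement component of f
    ```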

  7. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for performing load-flow analysis of a power system using a decomposition approach. The power system of the Space Shuttle is used as the basis for building a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each power system was divided into subsystems and simulated under steady-state conditions. The results from these tests were found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into subsystems was done by assigning a processor to each area. Thirteen transputers were available; therefore, up to 13 subsystems could be simulated at the same time. This report presents preliminary results for a load-flow analysis using a decomposition principle, and shows that the decomposition algorithm for load-flow analysis is well suited to parallel processing and increases the speed of execution.

  8. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    PubMed Central

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function (PSF) for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The restored CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information than existing CT image restoration methods. The robustness of our method was assessed in numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with low noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
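
    A minimal GoDec-style alternation illustrates the sparse-plus-low-rank split used above. This is a generic sketch, not the authors' exact solver, and the rank and threshold values are illustrative assumptions.

    ```python
    import numpy as np

    def sparse_lowrank_split(D, rank=2, tau=0.1, iters=50):
        """Alternating split of D into a low-rank part L and a sparse
        part S (GoDec-flavored sketch with illustrative parameters)."""
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(iters):
            # Low-rank step: best rank-r approximation of D - S
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Sparse step: soft-threshold the residual
            R = D - L
            S = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
        return L, S
    ```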

  9. Exploring Galaxy Formation and Evolution via Structural Decomposition

    NASA Astrophysics Data System (ADS)

    Kelvin, Lee; Driver, Simon; Robotham, Aaron; Hill, David; Cameron, Ewan

    2010-06-01

    The Galaxy And Mass Assembly (GAMA) structural decomposition pipeline (GAMA-SIGMA, Structural Investigation of Galaxies via Model Analysis) will provide multi-component information for a sample of ~12,000 galaxies across 9 bands ranging from near-UV to near-IR. This will allow the relationship between structural properties and broadband, optical-to-near-IR spectral energy distributions of bulge, bar, and disk components to be explored, revealing clues as to the history of baryonic mass assembly within a hierarchical clustering framework. Data are initially taken from the SDSS & UKIDSS-LAS surveys to test the robustness of our automated decomposition pipeline; these will eventually be replaced with data from the forthcoming higher-resolution VST & VISTA surveys, expanding the sample to ~30,000 galaxies.

  10. An Introductory Application of Principal Components to Cricket Data

    ERIC Educational Resources Information Center

    Manage, Ananda B. W.; Scariano, Stephen M.

    2013-01-01

    Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
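
    A hedged sketch of how such a ranking might be built with scikit-learn, using fabricated batting statistics; the columns and numbers are illustrative, not the IPL data used in the article.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Fabricated batting statistics: rows are players, columns are
    # (runs, average, strike rate, fours, sixes)
    stats = np.array([
        [733, 61.1, 150.8, 64, 31],
        [538, 59.8, 161.1, 42, 36],
        [462, 35.5, 128.7, 51, 11],
        [425, 38.6, 139.5, 39, 18],
    ])
    Z = StandardScaler().fit_transform(stats)
    pc1 = PCA(n_components=1).fit_transform(Z).ravel()
    # The sign of PC1 is arbitrary: orient it so higher score = better
    if np.corrcoef(pc1, stats[:, 0])[0, 1] < 0:
        pc1 = -pc1
    ranking = np.argsort(-pc1)      # players ordered by first PC score
    ```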

  11. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  12. Identifying apple surface defects using principal components analysis and artificial neural networks

    USDA-ARS?s Scientific Manuscript database

    Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...

  13. Influence of climate change factors on carbon dynamics in northern forested peatlands

    Treesearch

    C.C Trettin; R. Laiho; K. Minkkinen; J. Laine

    2005-01-01

    Peatlands are carbon-accumulating wetland ecosystems, developed through an imbalance between organic matter production and decomposition processes. Soil saturation is the principal cause of the anoxic conditions that constrain organic matter decay. Accordingly, changes in the hydrologic regime will affect the carbon (C) dynamics in forested peatlands. Our objective is to...

  14. SEASONAL AND LONG-TERM TREND DECOMPOSITION ALONG A SPATIAL GRADIENT: AN APPLICATION TO NUTRIENT DATA IN THE NEUSE RIVER WATERSHED. (U915590)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  15. ACTIVE AND SPECTATOR ADSORBATE SPECIES DURING NO DECOMPOSITION OVER CU-ZSM-5: TRANSIENT IR, SITE-POISONING, AND PROMOTION STUDIES (R823529)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  16. Molecular carbon isotopic evidence for the origin of geothermal hydrocarbons

    NASA Technical Reports Server (NTRS)

    Des Marais, D. J.; Donchin, J. H.; Nehring, N. L.; Truesdell, A. H.

    1981-01-01

    Isotopic measurements of individual geothermal hydrocarbons that are, as a group, of higher molecular weight than methane are reported. In light of these data, it is believed that the principal source of hydrocarbons in four geothermal areas in western North America is the thermal decomposition of sedimentary or groundwater organic matter.

  17. Radiation noise of the bearing applied to the ceramic motorized spindle based on the sub-source decomposition method

    NASA Astrophysics Data System (ADS)

    Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.

    2017-12-01

    This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing used in a ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced to calculate the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The radiation-noise spectra of the different components under various rotation speeds are used to assess the contribution of different eigenfrequencies to the radiation noise of each component, and the proportions of friction noise and impact noise are evaluated as well. The results provide a theoretical basis for the calculation of bearing noise and a reference for assessing the impact of the different components on the radiation noise of the bearing at different rotation speeds.

  18. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed; in this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and frequencies of the oscillation components are determined in a data-driven manner. The appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform; thus, it enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and in detecting phase-reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
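
    One building block of such a decomposition is a single stochastic oscillator in state-space form, tracked with a Kalman filter. The sketch below assumes fixed, illustrative parameters (freq, rho, q, r) rather than the empirical Bayes estimation described above.

    ```python
    import numpy as np

    def oscillator_kalman(y, freq, rho=0.99, q=0.1, r=1.0):
        """Kalman-filter one stochastic-oscillator component and return
        its filtered states and instantaneous phase."""
        c, s = np.cos(freq), np.sin(freq)
        F = rho * np.array([[c, -s], [s, c]])  # damped rotation = oscillation
        H = np.array([[1.0, 0.0]])             # observe first state coordinate
        Q, R = q * np.eye(2), np.array([[r]])
        x, P = np.zeros(2), np.eye(2)
        states = []
        for yt in y:
            x, P = F @ x, F @ P @ F.T + Q      # predict
            S = H @ P @ H.T + R                # innovation variance
            K = P @ H.T / S                    # Kalman gain (2x1)
            x = x + (K * (yt - H @ x)).ravel() # update state
            P = P - K @ H @ P
            states.append(x.copy())
        states = np.array(states)
        phase = np.arctan2(states[:, 1], states[:, 0])  # component phase
        return states, phase
    ```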

  19. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

    We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes a light curve into its principal components (eigenvectors), each with an associated eigenvalue indicating how much influence that basis vector has on the shape of the light curve. The method assumes that the most influential basis vectors correspond to the unwanted systematic variations produced by K2's constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components, which generally correspond to the flux variations caused by the motion of the star in the field of view. To choose how many components to remove, we estimate the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined differential photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve with each element in a list of cumulative sums of principal components, so that there are as many noise estimates as principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components at which the derivative effectively goes to zero; this is the optimal number of principal components to exclude when refitting the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2's light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
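
    A minimal sketch of the pixel-level PCA correction, assuming the light curve and pixel time series are given as arrays and that the number of components has already been chosen (the abstract selects it via the SG-CDPP derivative criterion):

    ```python
    import numpy as np

    def pca_detrend(lightcurve, pixel_series, n_comp=3):
        """Fit and subtract the leading principal components of the pixel
        time series from a raw light curve (illustrative sketch)."""
        # Rows: time samples; columns: pixels. Centre each pixel series.
        Xc = pixel_series - pixel_series.mean(axis=0)
        # Principal components in time = left singular vectors of Xc
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = U[:, :n_comp]
        # Least-squares fit of the basis vectors to the raw light curve
        coef, *_ = np.linalg.lstsq(basis, lightcurve, rcond=None)
        return lightcurve - basis @ coef
    ```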

  20. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method, with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum, with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies (currently inaccessible to ground-based observations) with archival Spitzer/IRS data and, in the future, with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.
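
    The core fitting step, a non-negative linear combination of one template per physical component, can be sketched with scipy's non-negative least squares. The template matrix below is randomly generated for illustration, not the actual IRS templates.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical template matrix: columns stand in for stellar,
    # interstellar and AGN template spectra on a common wavelength grid
    rng = np.random.default_rng(2)
    n_wave = 200
    templates = np.abs(rng.standard_normal((n_wave, 3)))
    true_w = np.array([0.2, 0.3, 0.5])
    spectrum = templates @ true_w + 0.01 * rng.standard_normal(n_wave)

    # Non-negative linear combination of one template per component
    weights, resid = nnls(templates, spectrum)
    agn_component = weights[2] * templates[:, 2]  # isolated AGN spectrum
    ```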

  1. Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.

    PubMed

    Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc

    2018-08-01

    Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently designed only to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violate algorithmic assumptions of standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach that can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue are captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets, which show normal image appearance; the BRATS dataset, containing images with brain tumors; and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25, respectively. Hence, our approach is an effective method for high-quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Warming and Nitrogen Addition Increase Litter Decomposition in a Temperate Meadow Ecosystem

    PubMed Central

    Gong, Shiwei; Guo, Rui; Zhang, Tao; Guo, Jixun

    2015-01-01

    Background: Litter decomposition greatly influences soil structure, nutrient content and carbon sequestration, but how litter decomposition is affected by climate change is still not well understood. Methodology/Principal Findings: A field experiment with increased temperature and nitrogen (N) addition was established in April 2007 to examine the effects of experimental warming, N addition and their interaction on litter decomposition in a temperate meadow steppe in northeastern China. Warming, N addition and warming plus N addition reduced the residual mass of L. chinensis litter by 3.78%, 7.51% and 4.53%, respectively, in 2008 and 2009, and by 4.73%, 24.08% and 16.1%, respectively, in 2010. Warming, N addition and warming plus N addition had no effect on the decomposition of P. communis litter in 2008 or 2009, but reduced the residual litter mass by 5.58%, 15.53% and 5.17%, respectively, in 2010. Warming and N addition reduced the cellulose percentage of L. chinensis and P. communis, particularly in 2010. The lignin percentage of L. chinensis and P. communis was reduced by warming but increased by N addition. The C, N and P contents of L. chinensis and P. communis litter increased with time. Warming and N addition reduced the C content and C:N ratios of L. chinensis and P. communis litter, but increased the N and P contents. Significant interactive effects of warming and N addition on litter decomposition were observed (P<0.01). Conclusion/Significance: The litter decomposition rate was highly correlated with soil temperature, soil water content and litter quality. Warming and N addition significantly impacted the litter decomposition rate in the Songnen meadow ecosystem, and the effects of warming and N addition on litter decomposition were also influenced by litter quality. These results highlight how climate change could alter grassland ecosystem carbon, nitrogen and phosphorus contents in soil by influencing litter decomposition. PMID:25774776

  3. Directly reconstructing principal components of heterogeneous particles from cryo-EM images.

    PubMed

    Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali

    2015-08-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Exploring Western and Eastern Pacific contributions to the 21st century Walker circulation intensification and teleconnected precipitation declines (Invited)

    NASA Astrophysics Data System (ADS)

    Funk, C. C.; Hoerling, M. P.; Hoell, A.; Verdin, J. P.; Robertson, F. R.; Alured, D.; Liebmann, B.

    2013-12-01

    As the earth's population, industry, and agricultural systems continue to expand and increase demand for limited hydrologic resources, developing better tools for monitoring, analyzing and perhaps even predicting decadal variations in precipitation will enable the climate community to better inform important policy and management decisions. To this end, in support of the development and humanitarian relief efforts of the US Agency for International Development, USGS, NOAA, UC Santa Barbara, and NASA scientists have been exploring global precipitation trends using observations and new ensembles of atmospheric general circulation model (AGCM) simulations from the ECHAM5, GFSv2, CAM4 and GMAO models. This talk summarizes this work, and discusses how combined analyses of AGCM simulations and observations might lead to credible decadal projections, for some regions and seasons, based on the strength of the Indo-Pacific warming signal. Focusing on the late boreal spring, a critical period for food insecure Africa, we begin by linearly decomposing 1900-2012 sea surface temperatures (SST) into components loading strongly in the Indo-Western Pacific and Eastern Pacific. Eastern Pacific (EP) SST variations are based on regressions with three time series: the first and second principal components of equatorial Pacific SST and the Pacific Decadal Oscillation. These influences are removed from Indo-Pacific SSTs, and the Indo-Western Pacific (IWP) SST variations are defined by the 1st principal component of the residuals, which we refer to as the Indo-West Pacific Warming Signal (IWPWS). The pattern of IWPWS SST changes resembles recent assessments of centennial warming, and identifies rapid warming in the equatorial western Pacific and north and south Pacific convergence zones. The circulation impacts of IWP and EP SST forcing are explored in two ways. First, assuming linear SST forcing relationships, IWP and EP decompositions of ECHAM5, GFS, CAM4 and GMAO AGCM simulations are presented. These results suggest that a substantial component of the recent Walker circulation intensification has been related to the IWPWS. The IWPWS warming extends from just north of Papua New Guinea to just west of Hawaii, and appears associated with SLP, wind and rainfall responses consistent with enhanced Indo-Pacific convection. These decomposition results are compared with a set of numerical simulation experiments based on the ECHAM5 and GFS models forced with characteristic IWP and EP SST for 1983-1996 and 1999-2012. The talk concludes with a tentative discussion of the decadal predictability associated with the IWPWS. Using both observed and model-simulated precipitation, we briefly explore potential IWPWS drought teleconnection regions in the Americas, Asia, Middle East, and Eastern Africa.

    Figure 1. Western Pacific and Eastern Pacific SST changes between 1999-2012 and 1983-1996.
    Figure 2. Western Pacific and Eastern Pacific GPCP precipitation changes between 1999-2012 and 1983-1996.

  5. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the size of the feature vector is large, there are dominant eigenvalues and eigenvectors associated with the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, their contribution to the Mahalanobis squared-distance is low, while the contribution of the other components is high due to the low value of the associated eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions, as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and the ability to filter out environmental effects, while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative is demonstrated here based on extreme value statistics, which yields a much better threshold that avoids false positives in the training data while allowing detection of all damaged cases.
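
    The decomposition of the Mahalanobis squared-distance into independent per-eigenvector terms can be sketched directly; directions with large eigenvalues (environmental variability) contribute little to the distance.

    ```python
    import numpy as np

    def mahalanobis_terms(x, X_train):
        """Mahalanobis squared-distance of x to the training data, written
        as a sum of independent per-eigenvector terms."""
        mu = X_train.mean(axis=0)
        C = np.cov(X_train, rowvar=False)
        evals, evecs = np.linalg.eigh(C)   # eigen-decomposition of covariance
        z = evecs.T @ (x - mu)             # coordinates in eigenvector space
        terms = z**2 / evals               # one independent term per component
        return terms, terms.sum()          # sum equals the squared distance
    ```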

  6. Marine environmental protection: An application of the nanometer photo catalyst method on decomposition of benzene.

    PubMed

    Lin, Mu-Chien; Kao, Jui-Chung

    2016-04-15

    Bioremediation is currently extensively employed in the elimination of coastal oil pollution, but it is not very effective, as the process takes several months to degrade oil. Among the components of oil, benzene is difficult to degrade due to its stable characteristics. This paper describes an experimental study on the decomposition of benzene by titanium dioxide (TiO2) nanometer photocatalysis. The photocatalyst is illuminated with 360-nm ultraviolet light to generate peroxide ions, resulting in complete decomposition of benzene into CO2 and H2O. In this study, a nonwoven fabric is coated with the photocatalyst and benzene. Using the Double-Shot Py-GC system on the residual component, complete decomposition of the benzene was verified after 4 h of exposure to ultraviolet light. The method proposed in this study can be directly applied to the elimination of marine oil pollution. Further studies will be conducted on coastal oil pollution in situ. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule... management plan. (c) Operator training and qualification. (d) Emission limitations and operating limits. (e...

  8. 40 CFR 60.2570 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c...

  9. Free energy landscape of a biomolecule in dihedral principal component space: sampling convergence and correspondence between structures and minima.

    PubMed

    Maisuradze, Gia G; Leitner, David M

    2007-05-15

    Dihedral principal component analysis (dPCA) has recently been developed and shown to display complex features of the free energy landscape of a biomolecule that may be absent in the free energy landscape plotted in principal component space due to mixing of internal and overall rotational motion that can occur in principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that for both methods the sampling convergence can be reached over a similar time. Minima in the free energy landscape in the space of the two largest dihedral principal components often correspond to unique structures, though we also find some distinct minima to correspond to the same structure. 2007 Wiley-Liss, Inc.
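
    The dPCA construction itself is compact: each dihedral angle is mapped to its cosine and sine to remove periodicity, and ordinary PCA is run on the lifted features (the standard lifting, as in Mu et al.). A sketch assuming a trajectory of dihedral angles in radians:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def dihedral_pca(angles, n_comp=2):
        """Dihedral PCA sketch: angles is an (n_frames, n_dihedrals)
        trajectory array in radians."""
        features = np.concatenate([np.cos(angles), np.sin(angles)], axis=1)
        pca = PCA(n_components=n_comp)
        # Projections span the axes of the free energy landscape
        projections = pca.fit_transform(features)
        return projections, pca
    ```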

  10. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  11. Insights from a refined decomposition of cloud feedbacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelinka, Mark D.; Zhou, Chen; Klein, Stephen A.

    Decomposing cloud feedback into components due to changes in several gross cloud properties provides valuable insights into its physical causes. Here we present a refined decomposition that separately considers changes in free tropospheric and low cloud properties, better connecting feedbacks to individual governing processes and avoiding ambiguities present in a commonly used decomposition. It reveals that three net cloud feedback components are robustly nonzero: positive feedbacks from increasing free tropospheric cloud altitude and decreasing low cloud cover and a negative feedback from increasing low cloud optical depth. Low cloud amount feedback is the dominant contributor to spread in net cloud feedback but its anticorrelation with other components damps overall spread. Furthermore, the ensemble mean free tropospheric cloud altitude feedback is roughly 60% as large as the standard cloud altitude feedback because it avoids aliasing in low cloud reductions. Implications for the “null hypothesis” climate sensitivity from well-understood and robustly simulated feedbacks are discussed.

  12. Insights from a refined decomposition of cloud feedbacks

    DOE PAGES

    Zelinka, Mark D.; Zhou, Chen; Klein, Stephen A.

    2016-09-05

    Decomposing cloud feedback into components due to changes in several gross cloud properties provides valuable insights into its physical causes. Here we present a refined decomposition that separately considers changes in free tropospheric and low cloud properties, better connecting feedbacks to individual governing processes and avoiding ambiguities present in a commonly used decomposition. It reveals that three net cloud feedback components are robustly nonzero: positive feedbacks from increasing free tropospheric cloud altitude and decreasing low cloud cover and a negative feedback from increasing low cloud optical depth. Low cloud amount feedback is the dominant contributor to spread in net cloud feedback but its anticorrelation with other components damps overall spread. Furthermore, the ensemble mean free tropospheric cloud altitude feedback is roughly 60% as large as the standard cloud altitude feedback because it avoids aliasing in low cloud reductions. Implications for the “null hypothesis” climate sensitivity from well-understood and robustly simulated feedbacks are discussed.

  13. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition.

    PubMed

    Norman-Haignere, Sam; Kanwisher, Nancy G; McDermott, Josh H

    2015-12-16

    The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Neural correlates of multimodal metaphor comprehension: Evidence from event-related potentials and time-frequency decompositions.

    PubMed

    Ma, Qingguo; Hu, Linfeng; Xiao, Can; Bian, Jun; Jin, Jia; Wang, Qiuzhen

    2016-11-01

    The present study examined the event-related potential (ERP) and time-frequency correlates of the comprehension of multimodal metaphors represented by the combination of a vehicle picture and a written animal word. Electroencephalogram data were recorded while participants decided whether the metaphor using an animal word for the vehicle rendered by a picture was appropriate or not, under two conditions: appropriate (e.g., sport utility vehicle + tiger) vs. inappropriate (e.g., sport utility vehicle + cat). The ERP results showed that inappropriate metaphors elicited larger N300 (280-360 ms) and N400 (380-460 ms) amplitudes than appropriate ones, in contrast with previous, exclusively verbal metaphor studies that rarely observed an N300 effect. A P600 (550-750 ms) was also observed and was larger in the appropriate-metaphor condition. In addition, a time-frequency principal component analysis revealed that two independent theta activities indexed the separable processes (retrieval of semantic features and semantic integration) underlying the N300 and N400. A delta-band activity was also induced within a later time window and best characterized the integration process underlying the P600. These results indicate a cognitive mechanism of multimodal metaphor comprehension that differs from verbal metaphor processing, mirrored by several separable processes indexed by ERP components and time-frequency components. The present study extends metaphor research by uncovering the functional roles of delta and theta as well as their unique contributions to the ERP components during multimodal metaphor comprehension. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. XCOM intrinsic dimensionality for low-Z elements at diagnostic energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornefalk, Hans

    2012-02-15

    Purpose: To determine the intrinsic dimensionality of linear attenuation coefficients (LACs) from XCOM for elements with low atomic number (Z = 1-20) at diagnostic x-ray energies (25-120 keV). H_0^q, the hypothesis that the space of LACs is spanned by q bases, is tested for various values of q. Methods: Principal component analysis is first applied and the LACs are projected onto the first q principal component bases. The residuals of the model values vs. XCOM data are determined for all energies and atomic numbers. Heteroscedasticity invalidates the prerequisite of i.i.d. errors necessary for bootstrapping residuals. Instead, wild bootstrap is applied, which, by not mixing residuals, allows the effect of the non-i.i.d. residuals to be reflected in the result. Credible regions for the eigenvalues of the correlation matrix for the bootstrapped LAC data are determined. If subsequent credible regions for the eigenvalues overlap, the corresponding principal component is not considered to represent true data structure but noise. If this happens for eigenvalues l and l + 1, for any l ≤ q, H_0^q is rejected. Results: The largest value of q for which H_0^q is non-rejectable at the 5% level is q = 4. This indicates that the statistically significant intrinsic dimensionality of low-Z XCOM data at diagnostic energies is four. Conclusions: The method presented allows determination of the statistically significant dimensionality of any noisy linear subspace. Knowledge of such significant dimensionality is of interest for any method making assumptions on intrinsic dimensionality and evaluating results on noisy reference data. For LACs, knowledge of the low-Z dimensionality might be relevant when parametrization schemes are tuned to XCOM data. For x-ray imaging techniques based on the basis decomposition method (Alvarez and Macovski, Phys. Med. Biol. 21, 733-744, 1976), an underlying dimensionality of two is commonly assigned to the LAC of human tissue at diagnostic energies. The finding of a higher statistically significant dimensionality thus raises the question whether a higher assumed model dimensionality (now feasible with the advent of multibin x-ray systems) might also be practically relevant, i.e., whether better tissue characterization results can be obtained.

  16. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed before spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
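
    For comparison, a generic NMF baseline for stain separation in optical-density space (not the circular-mixture method of the paper) can be sketched as follows:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def stain_separate(rgb_image, n_stains=2):
        """Generic NMF stain separation: convert RGB to optical density
        (Beer-Lambert), then factorize pixels into stain depths x spectra."""
        od = -np.log((rgb_image.astype(float) + 1.0) / 256.0)  # nonnegative
        V = od.reshape(-1, 3)                   # pixels x color channels
        model = NMF(n_components=n_stains, init="nndsvd", max_iter=500)
        depths = model.fit_transform(V)         # per-pixel stain depths
        spectra = model.components_             # stain OD spectra
        return depths.reshape(rgb_image.shape[:2] + (n_stains,)), spectra
    ```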

  17. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
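
    The key trick, resampling n-dimensional subject scores instead of p-dimensional data, can be sketched as follows; storage of the p × k bootstrap components is deferred until needed.

    ```python
    import numpy as np

    def fast_bootstrap_pca(X, n_boot=1000, k=3, seed=0):
        """Bootstrap PCA in the n-dimensional sample subspace.
        X has shape (p, n) with p measurements >> n subjects."""
        rng = np.random.default_rng(seed)
        p, n = X.shape
        Xc = X - X.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        coords = s[:, None] * Vt            # n-dim coordinates of subjects
        comps_lowdim = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)     # bootstrap resample of subjects
            B = coords[:, idx]
            B = B - B.mean(axis=1, keepdims=True)   # stays in the subspace
            Ub, sb, _ = np.linalg.svd(B, full_matrices=False)
            comps_lowdim.append(Ub[:, :k])  # store only n x k coordinates
        # p-dimensional bootstrap components are U @ Ub, formed on demand
        return U, comps_lowdim
    ```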

  18. Analysing Institutions Interdisciplinarity by Extensive Use of Rao-Stirling Diversity Index.

    PubMed

    Cassi, Lorenzo; Champeimont, Raphaël; Mescheba, Wilfriedo; de Turckheim, Élisabeth

    2017-01-01

    This paper shows how the Rao-Stirling diversity index may be used extensively for positioning and comparing institutions' interdisciplinary practices. Two decompositions of this index make it possible to explore different components of the diversity of the cited references in a corpus of publications. The paper aims at demonstrating how these bibliometric tools can be used for comparing institutions in a research field by highlighting collaboration orientations and institutions' strategies. To make the method available and easy to use for indicator users, this paper first recalls a previous result on the decomposition of the Rao-Stirling index into multidisciplinarity and interdisciplinarity components, then proposes a new decomposition to further explore the profile of research collaborations, and finally presents an application to neuroscience research in French universities.

  19. Analysing Institutions Interdisciplinarity by Extensive Use of Rao-Stirling Diversity Index

    PubMed Central

    Cassi, Lorenzo; Champeimont, Raphaël; Mescheba, Wilfriedo

    2017-01-01

    This paper shows how the Rao-Stirling diversity index may be used extensively for positioning and comparing institutions' interdisciplinary practices. Two decompositions of this index make it possible to explore different components of the diversity of the cited references in a corpus of publications. The paper aims at demonstrating how these bibliometric tools can be used for comparing institutions in a research field by highlighting collaboration orientations and institutions' strategies. To make the method available and easy to use for indicator users, this paper first recalls a previous result on the decomposition of the Rao-Stirling index into multidisciplinarity and interdisciplinarity components, then proposes a new decomposition to further explore the profile of research collaborations, and finally presents an application to neuroscience research in French universities. PMID:28114382
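
    The index itself is simple to compute: Δ = Σ_{i≠j} p_i p_j d_ij, where p gives the proportions of references per discipline and d is a disciplinary distance matrix. A sketch with illustrative, made-up inputs:

    ```python
    import numpy as np

    def rao_stirling(p, d):
        """Rao-Stirling diversity: sum over i != j of p_i * p_j * d_ij."""
        p = np.asarray(p, dtype=float)
        d = np.asarray(d, dtype=float)
        M = np.outer(p, p) * d
        return M.sum() - np.trace(M)    # drop the i == j terms

    # Toy example: three disciplines, fabricated distance matrix
    p = [0.5, 0.3, 0.2]
    d = [[0.0, 0.4, 0.9],
         [0.4, 0.0, 0.7],
         [0.9, 0.7, 0.0]]
    print(rao_stirling(p, d))
    ```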

  20. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  1. Principal Workload: Components, Determinants and Coping Strategies in an Era of Standardization and Accountability

    ERIC Educational Resources Information Center

    Oplatka, Izhar

    2017-01-01

    Purpose: In order to fill the gap in theoretical and empirical knowledge about the characteristics of principal workload, the purpose of this paper is to explore the components of principal workload as well as its determinants and the coping strategies commonly used by principals to face this personal state. Design/methodology/approach:…

  2. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang

    2017-03-01

    Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimal set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD for a given amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e., noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (a rolling bearing, a gear, and a diesel engine) under faulty operating conditions.

  3. Signal processing method and system for noise removal and signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren

    2009-04-14

    A signal processing method and system combining smooth-level wavelet pre-processing with artificial neural networks, all in the wavelet domain, for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then input into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and input into corresponding neural networks pre-trained to filter out noise in those components, also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
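
    A runnable skeleton of this pipeline using PyWavelets, with a soft-threshold stand-in where the patent would use trained neural networks; the wavelet, level, and threshold rule are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    def wavelet_domain_denoise(signal, wavelet="db4", level=4):
        """n-level DWT, per-component denoising in the wavelet domain,
        then inverse DWT back to the time domain."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        smooth, roughs = coeffs[0], coeffs[1:]

        def denoise(c):
            # Stand-in for a trained network: universal soft threshold
            t = np.median(np.abs(c)) / 0.6745 * np.sqrt(2 * np.log(c.size))
            return pywt.threshold(c, t, mode="soft")

        cleaned = [denoise(smooth)] + [denoise(c) for c in roughs]
        return pywt.waverec(cleaned, wavelet)  # clean signal, time domain
    ```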

  4. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
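
    For reference, a standard sketch of Horn's parallel analysis, here on the correlation matrix with a quantile threshold as the retention rule; both choices are common but not the only ones (the paper itself analyzes the covariance-matrix case).

    ```python
    import numpy as np

    def parallel_analysis(X, n_sim=200, quantile=0.95, seed=0):
        """Retain components whose sample eigenvalues exceed the chosen
        quantile of eigenvalues from same-sized random data."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        sims = np.empty((n_sim, p))
        for i in range(n_sim):
            R = rng.standard_normal((n, p))
            sims[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
        thresh = np.quantile(sims, quantile, axis=0)
        return int(np.sum(obs > thresh))   # number of retained components
    ```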

  5. Graph Frequency Analysis of Brain Signals

    PubMed Central

    Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro

    2016-01-01

    This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
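
    A minimal graph-frequency sketch: eigenvectors of the graph Laplacian define the graph Fourier basis, and small eigenvalues correspond to spatially smooth signal components. The adjacency matrix W and the low/high frequency split below are illustrative assumptions.

    ```python
    import numpy as np

    def graph_fourier(signal, W):
        """Graph Fourier transform of a brain signal on a network with
        symmetric adjacency matrix W."""
        L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
        evals, evecs = np.linalg.eigh(L)        # graph frequencies and modes
        coeffs = evecs.T @ signal               # spectral coefficients
        k = len(evals) // 3                     # illustrative low-band cutoff
        smooth_part = evecs[:, :k] @ coeffs[:k] # low-graph-frequency part
        return evals, coeffs, smooth_part
    ```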

  6. Single and two-shot quantitative phase imaging using Hilbert-Huang Transform based fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos

    2016-08-01

    In this contribution we propose two Hilbert-Huang transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging, applicable in both on-axis and off-axis configurations. In the first scheme, a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase-demodulated by the Hilbert spiral transform, aided by principal component analysis for estimating the local fringe orientation. Orientation calculation enables efficient analysis of closed fringes; it can be avoided using an arbitrary phase-shifted two-shot Gram-Schmidt orthonormalization scheme aided by Hilbert-Huang transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase-shifting demodulation. The robustness of the proposed techniques is corroborated using experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase-shifting scheme, which is used as a reference method.

  7. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
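
    The non-incremental NPT itself can be sketched in a few lines: explicit feature-space coordinates are recovered from an eigendecomposition of the kernel matrix so that X @ X.T reproduces K. This is a sketch under the abstract's observation that centering may be skipped; K is assumed symmetric positive semidefinite.

    ```python
    import numpy as np

    def npt_coordinates(K):
        """Explicit coordinates of training samples in kernel feature
        space, from the (PSD) kernel matrix K."""
        evals, evecs = np.linalg.eigh(K)
        keep = evals > 1e-10                   # drop numerically null modes
        X = evecs[:, keep] * np.sqrt(evals[keep])  # n x r coordinates
        return X                               # satisfies X @ X.T ~= K
    ```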

  8. Effects of elevated CO2 and temperature on forest floor litter decomposition and chemistry

    EPA Science Inventory

    The forest floor can be a major component of the carbon held in forested soils. In mature forests it represents the balance between additions and decomposition under current climate conditions. Because of its position at the soil surface, this reservoir of C is highly susceptible...

  9. ESTIMATION OF INHERENT OPTICAL PROPERTIES AND WATER CONSTITUENT CONCENTRATIONS FROM THE REMOTE-SENSING REFLECTANCE SPECTRA IN THE ALBEMARLE-PAMLICO ESTUARY, USA

    EPA Science Inventory

    The decomposition of remote sensing reflectance (RSR) spectra into absorption, scattering and backscattering coefficients, and scattering phase function is an important issue for estimating water quality (WQ) components. For Case 1 waters RSR decomposition can be easily accompli...

  10. Decomposing economic disparities in risky sexual behaviors among people who inject drugs in Tehran: Blinder-Oaxaca decomposition analysis.

    PubMed

    Noroozi, Mehdi; Sharifi, Hamid; Noroozi, Alireza; Rezaei, Fatemah; Bazrafshan, Mohammad Rafi; Armoon, Bahram

    2017-01-01

    To our knowledge, no previous study has systematically assessed the role of economic status in risky sexual behavior among people who inject drugs (PWID) in Iran. In this study, we used Blinder-Oaxaca (BO) decomposition to explore the contribution of economic status to inequality in unprotected sex among PWID in Tehran and to decompose it into its determinants. Behavioral surveys among PWID were conducted in Tehran, the capital city of Iran, from November 2016 to April 2017. We employed a cross-sectional design and snowball sampling methodology. We constructed the asset index (weighted by the first principal component analysis factor) using socioeconomic data and then divided the variable into 3 tertiles. We used the BO method to decompose the economic inequality in unprotected sex. Of the 520 recruited individuals, 20 were missing data for variables used to define their economic status, and were therefore excluded from the analysis. Not having access to harm reduction programs was the largest factor contributing to the economic disparity in unprotected sex, accounting for 5.5 percentage points of the 21.4% discrepancy. Of the unadjusted total economic disparity in unprotected sex, 52% was unexplained by observable characteristics included in the regression model. The difference in the prevalence of unprotected sex between the high-income and low-income groups was 25%. Increasing needle syringe program coverage and improving human immunodeficiency virus (HIV) knowledge are essential for efforts to eliminate inequalities in HIV risk behaviors among PWID.
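
    A sketch of the two-fold Blinder-Oaxaca decomposition with a linear probability model; studies of binary outcomes such as this one often use nonlinear extensions, and the group arrays here are assumed inputs.

    ```python
    import numpy as np

    def blinder_oaxaca(XA, yA, XB, yB):
        """Two-fold BO decomposition of the mean outcome gap between
        groups A and B into explained and unexplained parts."""
        def ols(X, y):
            X1 = np.column_stack([np.ones(len(X)), X])
            return np.linalg.lstsq(X1, y, rcond=None)[0]
        bA, bB = ols(XA, yA), ols(XB, yB)
        mA = np.append(1.0, XA.mean(axis=0))   # group A means (+ intercept)
        mB = np.append(1.0, XB.mean(axis=0))
        gap = mA @ bA - mB @ bB
        explained = (mA - mB) @ bB             # due to characteristics
        unexplained = mA @ (bA - bB)           # due to coefficients
        return gap, explained, unexplained    # gap = explained + unexplained
    ```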

  11. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts

    PubMed Central

    Pontifex, Matthew B.; Gwizdala, Kathryn L.; Parks, Andrew C.; Billinger, Martin; Brunner, Clemens

    2017-01-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
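
    A minimal scikit-learn sketch of the remove-and-back-project step the study examines; the synthetic blink source, channel count, and the peakiness rule for picking the artifact component are all illustrative.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      t = np.linspace(0, 10, 2000)
      neural = np.sin(2 * np.pi * 10 * t)                     # stand-in for brain activity
      blink = (np.abs(t % 2.5 - 1.25) < 0.05).astype(float)   # sparse blink-like bursts
      mixing = rng.normal(size=(2, 8))                        # project onto 8 "channels"
      X = np.column_stack([neural, blink]) @ mixing + 0.01 * rng.normal(size=(2000, 8))

      ica = FastICA(n_components=2, random_state=0)
      S = ica.fit_transform(X)                                # estimated source activations
      peakiness = [np.max(np.abs(s)) / np.std(s) for s in S.T]
      S[:, int(np.argmax(peakiness))] = 0.0                   # zero the blink component
      X_clean = ica.inverse_transform(S)                      # back-project the rest

    Rerunning this with different random_state values mimics the decomposition-to-decomposition variability the study quantifies.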

  12. Effects of water flow regulation on ecosystem functioning in a Mediterranean river network assessed by wood decomposition.

    PubMed

    Abril, Meritxell; Muñoz, Isabel; Casas-Ruiz, Joan P; Gómez-Gener, Lluís; Barceló, Milagros; Oliva, Francesc; Menéndez, Margarita

    2015-06-01

    Mediterranean rivers are extensively modified by flow regulation practices along their courses. An important part of river impoundment in this area is related to small dams constructed mainly for water abstraction. These structures drastically modify ecosystem morphology, transforming lotic reaches into lentic ones and increasing their alternation along the river. Hydro-morphological differences between these reaches indicate that flow regulation can trigger important changes in ecosystem functioning. Decomposition of organic matter is an integrative process, which makes it a good indicator of ecosystem change. The aim of this study was to assess the effect of flow regulation on ecosystem functioning at the river-network scale, using wood decomposition as a functional indicator. We studied the mass loss from wood sticks over three months in lotic and lentic reaches located along a Mediterranean river basin, in both winter and summer. Additionally, we identified the environmental factors affecting decomposition rates along the river orders. The results revealed differences in decomposition rates between sites in both seasons that were principally related to differences between stream orders. The rates were mainly related to temperature, nutrient concentrations (NO2(-), NO3(-)) and water residence time. High-order streams with higher temperature and nutrient concentrations exhibited higher decomposition rates than low-order streams. The effect of flow regulation on decomposition rates appeared significant only in high orders, especially in winter, when the hydrological characteristics of lotic and lentic habitats varied widely. Lotic reaches with lower water residence time exhibited greater decomposition rates than lentic reaches, probably due to greater physical abrasion and differences in the microbial assemblages. Overall, our study revealed that in high orders the reduction of flow caused by regulation affects wood decomposition, indicating changes in ecosystem functioning. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density (iCSD) method. The resulting spatiotemporal information about the current dynamics can then be decomposed into functional components using Independent Component Analysis (ICA). Using test data that model recordings of evoked potentials on a grid of 4 × 5 × 7 points, we show that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found an appropriate approach to decomposing neural dynamics into functional components, we use the technique to study somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings out new, more detailed information on the timing and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  14. Data analysis using a combination of independent component analysis and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Lin; Tung, Pi-Cheng; Huang, Norden E.

    2009-06-01

    A combination of independent component analysis and empirical mode decomposition (ICA-EMD) is proposed in this paper to analyze low signal-to-noise-ratio data. The advantages of the ICA-EMD combination are twofold: ICA needs only a few sensory clues to separate the original source from unwanted noise, and EMD can effectively separate the data into its constituent parts. The case studies reported here involve original sources contaminated by white Gaussian noise. The simulation results show that the ICA-EMD combination is an effective data analysis tool.
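
    A short sketch of the EMD half of such a pipeline, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal); the ICA stage is omitted, and discarding the fastest IMFs as noise is a deliberate simplification.

      import numpy as np
      from PyEMD import EMD   # assumes the EMD-signal package is installed

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 1000)
      source = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)
      noisy = source + 0.4 * rng.normal(size=t.size)   # white Gaussian contamination

      imfs = EMD().emd(noisy, t)                       # intrinsic mode functions
      denoised = imfs[2:].sum(axis=0)                  # crude: drop the fastest IMFs
      rms = np.sqrt(np.mean((denoised - source) ** 2))
      print(f"{imfs.shape[0]} IMFs extracted; rms error after denoising: {rms:.3f}")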

  15. The Influence Function of Principal Component Analysis by Self-Organizing Rule.

    PubMed

    Higuchi; Eguchi

    1998-07-28

    This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule was proposed, and its robustness observed through simulation, by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated using the theory of the influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against contamination in any direction orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
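
    The self-organizing rule studied in this line of work is commonly Oja's rule; a minimal numpy sketch under that assumption, extracting the first principal component vector from streaming samples:

      import numpy as np

      rng = np.random.default_rng(4)
      # Anisotropic 2-D data whose leading principal axis is (1, 1)/sqrt(2).
      C = np.array([[3.0, 2.0], [2.0, 3.0]])
      X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

      w = rng.normal(size=2)
      w /= np.linalg.norm(w)
      eta = 0.005
      for x in X:                        # one pass of Oja's self-organizing rule
          y = w @ x                      # neuron output
          w += eta * y * (x - y * w)     # Hebbian update with built-in normalization

      print("learned direction:", w / np.linalg.norm(w))   # ~ +/-(0.707, 0.707)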

  16. Decomposition of hydroxy amino acids in foraminiferal tests; kinetics, mechanism and geochronological implications

    USGS Publications Warehouse

    Bada, J.L.; Shou, M.-Y.; Man, E.H.; Schroeder, R.A.

    1978-01-01

    The diagenesis of the hydroxy amino acids serine and threonine in foraminiferal tests has been investigated. The decomposition pathways of these amino acids are complex; the principal reactions appear to be dehydration, aldol cleavage and decarboxylation. Stereochemical studies indicate that the α-amino-n-butyric acid (ABA) detected in foraminiferal tests is the end product of the threonine dehydration pathway. Decomposition of serine and threonine in foraminiferal tests from two well-dated Caribbean deep-sea cores, P6304-8 and -9, has been found to follow irreversible first-order kinetics. Three empirical equations were derived for the disappearance of serine and threonine and the appearance of ABA. These equations can be used as a new geochronological method for dating foraminiferal tests from other deep-sea sediments. Preliminary results suggest that ages deduced from the ABA kinetics equation are the most reliable, because "species effect" and contamination problems are not important for this nonbiological amino acid. Because of the variable serine and threonine contents of modern foraminiferal species, accurate age estimates can likely be obtained from the serine and threonine decomposition equations only if a homogeneous species assemblage, or a single-species sample isolated from mixed natural assemblages, is used. © 1978.
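
    For orientation, the dating use of irreversible first-order kinetics reduces to t = ln(C0/Ct)/k; a tiny computation with placeholder values (the paper's calibrated equations are not reproduced here):

      import math

      # Irreversible first-order decay: C(t) = C0 * exp(-k t)  =>  t = ln(C0/Ct) / k
      k = 1.2e-6            # hypothetical rate constant in 1/yr (placeholder)
      C0, Ct = 1.00, 0.62   # initial vs. measured serine content (arbitrary units)
      age_yr = math.log(C0 / Ct) / k
      print(f"estimated age: {age_yr:,.0f} yr")   # ~398,000 yr for these placeholders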

  17. Fourier decomposition of payoff matrix for symmetric three-strategy games.

    PubMed

    Szabó, György; Bodó, Kinga S; Allen, Benjamin; Nowak, Martin A

    2014-10-01

    In spatial evolutionary games, payoff matrices are used to describe pair interactions among neighboring players located on a lattice. Here we introduce a way in which the payoff matrices can be built up as a sum of payoff components reflecting basic symmetries. For two-strategy games this decomposition reproduces interactions characteristic of the Ising model. For three-strategy symmetric games the Fourier components can be classified into four types, representing games with self-dependent and cross-dependent payoffs, variants of three-strategy coordination, and the rock-scissors-paper (RSP) game. In the absence of the RSP component the game is a potential game, and the resulting potential matrix is evaluated. The general features of these systems are analyzed when the game is expressed as a linear combination of these components.

  18. Use of principal-component, correlation, and stepwise multiple-regression analyses to investigate selected physical and hydraulic properties of carbonate-rock aquifers

    USGS Publications Warehouse

    Brown, C. Erwin

    1993-01-01

    Correlation analysis, in conjunction with principal-component and multiple-regression analyses, was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter out the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of the data. Stepwise multiple-regression analysis was used to determine how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level. © 1993.

  19. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    Analyzing nonstationary response signals to obtain vibration characteristics is extremely important in vibration-based structural diagnosis. In this work, we introduce a well-suited time-frequency decomposition method, termed local mean decomposition (LMD), to replace the widely used empirical mode decomposition (EMD). By employing LMD, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD on synthetic data and on experimental data recorded from a simply supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is then proposed.

  20. Genetic algorithm applied to the selection of factors in principal component-artificial neural networks: application to QSAR study of calcium channel antagonist activity of 1,4-dihydropyridines (nifedipine analogous).

    PubMed

    Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba

    2003-01-01

    A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. Principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used to select the best set of extracted principal components. A feed-forward artificial neural network trained with the error back-propagation algorithm was used to model the nonlinear relationship between the selected principal components and the biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the former yields better predictive ability.
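
    A hedged scikit-learn sketch of the PC-ANN backbone (descriptor compression by PCA feeding a feed-forward network); the genetic-algorithm selection of principal components is replaced here by simply keeping the leading components, and the data are synthetic.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)
      X = rng.normal(size=(124, 50))     # 124 molecules x 50 toy descriptors
      y = X[:, :3] @ np.array([1.0, -0.5, 0.25]) + 0.1 * rng.normal(size=124)

      # PCA compresses the descriptors; the MLP models the PC-activity relation.
      model = make_pipeline(StandardScaler(),
                            PCA(n_components=10),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
      model.fit(X, y)
      print("train R^2:", round(model.score(X, y), 3))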

  1. Dynamic correlations at different time-scales with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, T.; Aste, Tomaso

    2018-07-01

    We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
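
    A pandas sketch of the rolling-window correlation step; for brevity, the slow EMD components are stood in for by centered moving averages of two synthetic index series.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(6)
      n = 1000
      common = np.cumsum(rng.normal(size=n))            # shared driver of both indices
      a = pd.Series(common + np.cumsum(rng.normal(size=n)), name="index_A")
      b = pd.Series(common + np.cumsum(rng.normal(size=n)), name="index_B")

      # Stand-ins for slow EMD components: centered moving averages.
      a_slow = a.rolling(50, center=True).mean()
      b_slow = b.rolling(50, center=True).mean()

      # Time-varying dependence at the slow scale, over a 100-step rolling window.
      dyn_corr = a_slow.rolling(100).corr(b_slow)
      print(dyn_corr.dropna().describe())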

  2. [The intensity of phytodetrite decomposition in Larch Forest of the permafrost zone in central Siberia].

    PubMed

    Prokushkin, S G; Prokushkin, A S; Sorokin, N D

    2014-01-01

    Based on the results of long-term investigations, a quantitative assessment of phytodetrite mineralization rates is provided, and their role in the biological cycle of larch stands growing in the permafrost zone of Central Evenkia is discussed. It is demonstrated that phytodetrite destruction in the subshrub-sphagnum and cowberry-green moss larch stands is extremely slow: the plant litter contains the most recalcitrant organic matter, with the lowest decomposition coefficient of 0.03-0.04 year(-1), whereas fresh components of the plant litter have 3- to 4-fold higher values. An insignificant input of N and C from the analyzed mortmass to the soil has been registered. It has been revealed that the changes in N and C in the decomposing components are closely related to the quantitative dynamics (biomass) of microorganisms such as hydrolytic bacteria and, especially, micromycetes.

  3. Revisiting the concept of recalcitrance and organic matter persistence in soils and aquatic systems: Does environment trump chemistry?

    NASA Astrophysics Data System (ADS)

    Marin-Spiotta, E.

    2014-12-01

    Most ecological models of decomposition rely on plant litter chemistry. However, growing evidence suggests that the chemical composition of organic matter (OM) is not a good predictor of its eventual fate in terrestrial or aquatic environments. New data on variable decomposition rates of select organic compounds challenge concepts of chemical recalcitrance, i.e. the inherent ability of certain molecular structures to resist biodegradation. The role of environmental or "ecosystem" properties on influencing decomposition dates back to some of the earliest research on soil OM. Despite early recognition that the physical and aqueous matrices are critical in determining the fate of organic compounds, the prevailing paradigm hinges on intrinsic chemical properties as principal predictors of decay rate. Here I build upon recent reviews and discuss new findings that contribute to three major transformations in our understanding of OM persistence: (1) a shift away from an emphasis on chemical recalcitrance as a primary predictor of turnover, (2) new interpretations of radiocarbon ages which challenge predictions of reactivity, and (3) the recognition that most detrital OM accumulating in soils and in water has been microbially processed. Predictions of OM persistence due to aromaticity are challenged by high variability in lignin and black C turnover observed in terrestrial and aquatic environments. Contradictions in the behavior of lignin are, in part, influenced by inconsistent methodologies among research communities. Even black C, long considered to be one of the most recalcitrant components of OM, is susceptible to biodegradation, challenging predictions of the stability of aromatic structures. At the same time, revised interpretations of radiocarbon data suggest that organic compounds can acquire long mean residence times by various mechanisms independent of their molecular structure. Understanding interactions between environmental conditions and biological reactivity can improve predictions of how disturbance events can further stabilize or destabilize organic C pools, with implications for terrestrial C storage, aquatic C cycling, and climate change.

  4. Thermal Decomposition of Calcium Perchlorate/Iron-Mineral Mixtures: Implications of the Evolved Oxygen from the Rocknest Eolian Deposit in Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    Bruck, A. M.; Sutter, B.; Ming, D. W.; Mahaffy, P.

    2014-01-01

    A major oxygen release between 300 and 500 °C was detected by the Mars Curiosity rover Sample Analysis at Mars (SAM) instrument at the Rocknest eolian deposit. Thermal decomposition of perchlorate (ClO4-) salts in the Rocknest samples is a possible explanation for this evolved oxygen release. Relative to Na-, K-, Mg-, and Fe-perchlorate, the thermal decomposition of Ca-perchlorate in laboratory experiments released O2 in the temperature range (400-500 °C) closest to the O2 release temperatures observed for the Rocknest material. Furthermore, calcium perchlorate could have been the source of Cl in the chlorinated-hydrocarbon species that were detected by SAM. Different components in the Martian soil could affect the decomposition temperature of calcium perchlorate or another oxychlorine species, and this interaction of components in the soil could result in O2 release temperatures consistent with those detected by SAM in the Rocknest materials. The decomposition temperatures of various alkali-metal perchlorates are known to decrease in the presence of a catalyst. The objective of this work is to investigate catalytic interactions on calcium perchlorate from various iron-bearing minerals known to be present in the Rocknest material.

  5. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
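
    A numpy sketch of the Fourier-basis FPCA idea: each city's weekly curve is projected onto a small Fourier basis by least squares, and PCA is run on the basis coefficients. This ignores the smoothing-parameter machinery of full FPCA; sizes and data are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      n_cities, n_days = 42, 7
      t = np.arange(n_days) / n_days
      # Synthetic drug-load curves with a weekend bump plus noise.
      curves = (1.0 + 0.8 * np.exp(-((t - 5 / 7) ** 2) / 0.01)
                + 0.2 * rng.normal(size=(n_cities, n_days)))

      # Fourier design matrix: constant plus sin/cos of the first two harmonics.
      B = np.column_stack([np.ones_like(t)] +
                          [f(2 * np.pi * k * t) for k in (1, 2) for f in (np.sin, np.cos)])
      coef, *_ = np.linalg.lstsq(B, curves.T, rcond=None)   # 5 coefficients per city

      Cc = coef.T - coef.T.mean(0)                 # centered coefficient matrix
      U, s, Vt = np.linalg.svd(Cc, full_matrices=False)
      explained = s ** 2 / np.sum(s ** 2)
      print("variance explained by first 3 FPCs:", explained[:3].round(3))
      fpc1_curve = B @ Vt[0]                       # first functional PC as a weekly curve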

  6. 40 CFR 62.14505 - What are the principal components of this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Section 62.14505 (Protection of Environment, Environmental Protection Agency): This subpart contains the eleven major components listed in paragraphs (a...

  7. Gibbsian Stationary Non-equilibrium States

    NASA Astrophysics Data System (ADS)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-09-01

    We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions, and we apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence-free flows on the transition graph. Since divergence-free flows are characterized by cyclic decompositions, we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translationally covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component, all three associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.

  8. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic-accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were collected continuously by the Chilean Police and sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the low- and high-frequency decomposition of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT.
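
    A numpy sketch of the Hankel-SVD idea behind MSVD, at a single level: embed the series in a Hankel matrix, keep the leading singular components as the low-frequency part, and average anti-diagonals back into a series. This is essentially one step of singular spectrum analysis, used here as an approximation of the method; window length and data are illustrative.

      import numpy as np

      def hankel_split(x, window=52, n_low=2):
          n = len(x)
          H = np.lib.stride_tricks.sliding_window_view(x, window).T   # Hankel matrix
          U, s, Vt = np.linalg.svd(H, full_matrices=False)

          def unhankel(M):
              # Average anti-diagonals to map the matrix back to a series.
              out, cnt = np.zeros(n), np.zeros(n)
              for i in range(M.shape[0]):
                  for j in range(M.shape[1]):
                      out[i + j] += M[i, j]
                      cnt[i + j] += 1
              return out / cnt

          low = unhankel(U[:, :n_low] * s[:n_low] @ Vt[:n_low])   # smooth component
          return low, x - low                                     # high-frequency residual

      rng = np.random.default_rng(8)
      weeks = np.arange(780)                                      # 15 years of weekly data
      x = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + 3 * rng.normal(size=weeks.size)
      low, high = hankel_split(x)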

  9. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain

    PubMed Central

    Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic-accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were collected continuously by the Chilean Police and sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the low- and high-frequency decomposition of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT. PMID:28261267

  10. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    NASA Astrophysics Data System (ADS)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
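
    A tiny numpy illustration of the split named above, under the assumption that the two unsigned entropic terms are a specificity h(s) = −log2 p(s) and an ambiguity h(s|t) = −log2 p(s|t), with pointwise mutual information i(s;t) = h(s) − h(s|t). The joint distribution is invented; the lattice construction itself is beyond a few lines.

      import numpy as np

      p = np.array([[0.4, 0.1],          # joint distribution p(s, t), illustrative
                    [0.1, 0.4]])
      ps, pt = p.sum(1), p.sum(0)        # marginals p(s) and p(t)

      for s in range(2):
          for t in range(2):
              spec = -np.log2(ps[s])             # specificity  h(s)
              ambi = -np.log2(p[s, t] / pt[t])   # ambiguity    h(s|t)
              i_pt = spec - ambi                 # pointwise mutual information i(s;t)
              print(f"s={s} t={t}: h(s)={spec:.3f} h(s|t)={ambi:.3f} i={i_pt:.3f}")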

  11. Generalized Cahn-Hilliard equation for solutions with drastically different diffusion coefficients. Application to exsolution in ternary feldspar

    NASA Astrophysics Data System (ADS)

    Petrishcheva, E.; Abart, R.

    2012-04-01

    We address mathematical modeling and computer simulations of phase decomposition in a multicomponent system. As opposed to binary alloys with one common diffusion parameter, our main concern is phase decomposition in real geological systems under the influence of strongly different interdiffusion coefficients, as frequently encountered in mineral solid solutions with coupled diffusion on different sub-lattices. Our goal is to explain deviations from equilibrium element partitioning which are often observed in nature, e.g., in a cooled ternary feldspar. To this end we first adapt the standard Cahn-Hilliard model to the multicomponent diffusion problem and account for arbitrary diffusion coefficients. This is done using Onsager's approach, such that the flux of each component results from the combined action of the chemical potentials of all components. In a second step the generalized Cahn-Hilliard equation is solved numerically using a finite-element approach. We introduce and investigate several decomposition scenarios that may produce systematic deviations from equilibrium element partitioning. Both ideal solutions and ternary feldspar are considered. Typically, the slowest component is initially "frozen" and the decomposition effectively takes place only for the two "fast" components. At this stage, deviations from equilibrium element partitioning are indeed observed, and they may become "frozen in" under conditions of cooling. The final equilibration of the system occurs on a considerably slower time scale, so the system may indeed remain incompletely equilibrated at the time of observation. Our approach reveals the intrinsic reasons for the specific phase-separation path and rigorously describes it by direct numerical solution of the generalized Cahn-Hilliard equation.
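
    For orientation, a minimal finite-difference sketch of the standard binary Cahn-Hilliard equation, ∂c/∂t = ∇²(c³ − c − γ∇²c), on a periodic 1-D grid; the paper's generalized multicomponent version with Onsager cross-terms (and its finite-element treatment) is a substantial extension of this.

      import numpy as np

      def lap(u, dx):
          # Periodic second difference (1-D Laplacian).
          return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx ** 2

      rng = np.random.default_rng(9)
      n, dx, dt, gamma = 128, 1.0, 0.01, 1.0
      c = 0.05 * rng.normal(size=n)              # near-symmetric initial composition

      for _ in range(20000):                     # explicit Euler time stepping
          mu = c ** 3 - c - gamma * lap(c, dx)   # chemical potential
          c += dt * lap(mu, dx)                  # conserved Cahn-Hilliard dynamics

      print("compositions after coarsening:", c.min().round(2), c.max().round(2))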

  12. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples, which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. A distinguishing feature of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors; furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. First, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Second, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.

  13. 10Be in late deglacial climate simulated by ECHAM5-HAM - Part 2: Isolating the solar signal from 10Be deposition

    NASA Astrophysics Data System (ADS)

    Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.

    2013-10-01

    This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 yr time-slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 BP and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies on lower frequencies, dominated by the 11 yr solar cycle within the 30 yr time scale of these experiments; the climatic noise is of higher frequencies. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing a time series into different frequency components. The low-frequency components and the long-term trend represent production and have reduced noise compared with the entire frequency spectrum of the deposition. The high-frequency components represent climate-driven noise related to the seasonal cycle of, e.g., precipitation, and are closely connected to the high frequencies of precipitation. These results firstly show that the 10Be atmospheric production signal is preserved in the deposition flux to the surface, even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data by simply applying the EOF analysis when a reasonably large number of data sets is available, or by decomposing the individual data sets to filter out high-frequency fluctuations.
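
    A compact numpy sketch of the EOF step: PCA applied to the space-time anomaly matrix via SVD, giving a leading spatial pattern and a principal-component time series. The synthetic "deposition" field, with an 11 yr cycle, is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(10)
      n_years, n_sites = 30, 200
      solar = np.sin(2 * np.pi * np.arange(n_years) / 11.0)   # 11 yr cycle proxy
      pattern = rng.random(n_sites)                           # mean deposition pattern
      field = np.outer(solar, pattern) + 0.3 * rng.normal(size=(n_years, n_sites))

      anom = field - field.mean(axis=0)           # remove each site's climatology
      U, s, Vt = np.linalg.svd(anom, full_matrices=False)
      var_frac = s ** 2 / np.sum(s ** 2)
      pc1, eof1 = U[:, 0] * s[0], Vt[0]           # solar-like PC1 and its spatial pattern
      print("EOF1 explains", round(100 * var_frac[0], 1), "% of the variance")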

  14. Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki

    2004-04-01

    We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.

  15. Detection of goal events in soccer videos

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos using audio-track features alone, without relying on expensive-to-compute video-track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of candidate highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method against the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method, based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources; in total we have seven hours of soccer games, comprising eight gigabytes of data. One of the five soccer games is used as training data for the audio classes (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.

  16. An efficient rhythmic component expression and weighting synthesis strategy for classifying motor imagery EEG in a brain computer interface

    NASA Astrophysics Data System (ADS)

    Wang, Tao; He, Bin

    2004-03-01

    The recognition of mental states during motor imagery tasks is crucial for EEG-based brain-computer interface research. We have developed a new algorithm, based on frequency decomposition and a weighting synthesis strategy, for recognizing imagined right- and left-hand movements. A frequency range from 5 to 25 Hz was divided into 20 band bins for each trial, and the corresponding envelopes of the filtered EEG signals for each trial were extracted as a measure of instantaneous power at each frequency band. The dimensionality of the feature space was reduced from 200 (corresponding to 2 s) to 3 by down-sampling the envelopes of the feature signals and subsequently applying principal component analysis. The linear discriminant analysis algorithm was then used to classify the features, owing to its generalization capability. Each frequency band bin was weighted by a function determined according to the classification accuracy during the training process. The present classification algorithm was applied to a dataset of nine human subjects and achieved a classification success rate of 90% in training and 77% in testing. These promising results suggest that the present classification algorithm can be used in initiating general-purpose mental-state recognition based on motor imagery tasks.
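
    A scipy/scikit-learn sketch of this pipeline (band filtering, envelope extraction, PCA down to three features, LDA classification); the filter design, band count, and synthetic trials are simplifications of the study's setup.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(11)
      fs, n_trials, n_samp = 100, 120, 200          # 2 s trials sampled at 100 Hz
      labels = rng.integers(0, 2, n_trials)         # imagined left vs. right hand
      t = np.arange(n_samp) / fs
      # Synthetic trials: class-dependent 10 Hz (mu-band) amplitude plus noise.
      trials = np.array([(0.5 + 0.8 * y) * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 6))
                         + rng.normal(size=n_samp) for y in labels])

      feats = []
      for lo in range(5, 25, 5):                    # coarse 5 Hz band bins, 5-25 Hz
          b, a = butter(4, [lo / (fs / 2), (lo + 5) / (fs / 2)], btype="band")
          env = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1))  # envelope
          feats.append(env[:, ::10])                # down-sample the envelope
      X = np.hstack(feats)

      X3 = PCA(n_components=3).fit_transform(X)     # reduce feature dimensionality
      lda = LinearDiscriminantAnalysis().fit(X3, labels)
      print("training accuracy:", round(lda.score(X3, labels), 2))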

  17. Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin

    NASA Astrophysics Data System (ADS)

    Downey, Austin; Laflamme, Simon; Ubertini, Filippo

    2016-12-01

    The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.

  18. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088

  19. Structure-seeking multilinear methods for the analysis of fMRI data.

    PubMed

    Andersen, Anders H; Rayens, William S

    2004-06-01

    In comprehensive fMRI studies of brain function, the data structures often contain higher-order ways such as trial, task condition, subject, and group in addition to the intrinsic dimensions of time and space. While multivariate bilinear methods such as principal component analysis (PCA) have been used successfully for extracting information about spatial and temporal features in data from a single fMRI run, the need to unfold higher-order data sets into bilinear arrays has led to decompositions that are nonunique and to the loss of multiway linkages and interactions present in the data. These additional dimensions or ways can be retained in multilinear models to produce structures that are unique and which admit interpretations that are neurophysiologically meaningful. Multiway analysis of fMRI data from multiple runs of a bilateral finger-tapping paradigm was performed using the parallel factor (PARAFAC) model. A trilinear model was fitted to a data cube of dimensions voxels by time by run. Similarly, a quadrilinear model was fitted to a higher-way structure of dimensions voxels by time by trial by run. The spatial and temporal response components were extracted and validated by comparison to results from traditional SVD/PCA analyses based on scenarios of unfolding into lower-order bilinear structures.
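
    A hedged sketch of a trilinear PARAFAC fit, assuming the third-party tensorly library; the small random tensor stands in for a voxels × time × run data cube.

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import parafac

      rng = np.random.default_rng(12)
      # Rank-2 ground truth: a (voxels x time x run) cube built from mode factors.
      A, B, C = rng.random((300, 2)), rng.random((50, 2)), rng.random((6, 2))
      cube = np.einsum("ir,jr,kr->ijk", A, B, C) + 0.01 * rng.normal(size=(300, 50, 6))

      weights, factors = parafac(tl.tensor(cube), rank=2)   # trilinear, essentially unique
      spatial, temporal, runs = factors                     # one factor matrix per mode
      print([f.shape for f in factors])                     # [(300, 2), (50, 2), (6, 2)]

    The uniqueness (up to scaling and permutation) of such trilinear fits is what the abstract contrasts with the nonunique bilinear unfoldings.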

  20. Principals' Perceptions Regarding Their Supervision and Evaluation

    ERIC Educational Resources Information Center

    Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann

    2015-01-01

    This study examined the perceptions of principals concerning principal evaluation and supervisory feedback. Principals were asked two open-ended questions. Respondents included 82 principals in the Rocky Mountain region. The emerging themes were "Superintendent Performance," "Principal Evaluation Components," "Specific…

  1. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower-dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, identifying all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly because the principal components are not independent of one another, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. © 2007 Wiley-Liss, Inc.

  2. Data-driven signal-resolving approaches of infrared spectra to explore the macroscopic and microscopic spatial distribution of organic and inorganic compounds in plant.

    PubMed

    Chen, Jian-bo; Sun, Su-qin; Zhou, Qun

    2015-07-01

    The nondestructive and label-free infrared (IR) spectroscopy is a direct tool to characterize the spatial distribution of organic and inorganic compounds in plants. Since plant samples are usually complex mixtures, signal-resolving methods are necessary to find the spectral features of compounds of interest in the signal-overlapped IR spectra. In this research, two approaches using existing data-driven signal-resolving methods are proposed to interpret the IR spectra of plant samples. If the number of spectra is small, "tri-step identification" can enhance the spectral resolution to separate and identify the overlapped bands. First, the envelope bands of the original spectrum are interpreted according to spectra-structure correlations. Then the spectrum is differentiated to resolve the underlying peaks in each envelope band. Finally, two-dimensional correlation spectroscopy is used to enhance the spectral resolution further. For a large number of spectra, "tri-step decomposition" can resolve the spectra by multivariate methods to obtain structural and semi-quantitative information about the chemical components. Principal component analysis is used first to explore the existing signal types without any prior knowledge. Then the spectra are decomposed by self-modeling curve resolution methods to estimate the spectra and contents of the significant chemical components. Finally, targeted methods such as partial least squares target can sensitively explore the content profiles of specific components. As an example, the macroscopic and microscopic distribution of eugenol and calcium oxalate in the clove bud is studied.
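
    A scipy sketch of the derivative step in "tri-step identification": a Savitzky-Golay second derivative pulls apart two overlapped bands that appear as a single envelope in the raw spectrum. Band positions, widths, and the threshold are invented for illustration.

      import numpy as np
      from scipy.signal import savgol_filter

      wn = np.linspace(1350, 750, 600)              # wavenumber axis, cm^-1
      rng = np.random.default_rng(13)
      gauss = lambda c, w: np.exp(-((wn - c) ** 2) / (2 * w ** 2))
      spectrum = gauss(1085, 18) + 0.8 * gauss(1055, 18) + 0.002 * rng.normal(size=wn.size)

      # The second derivative sharpens overlapped peaks; minima mark band centers.
      d2 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=2)
      is_min = (d2 < np.roll(d2, 1)) & (d2 < np.roll(d2, -1)) & (d2 < -1e-4)
      print("resolved band centers near:", wn[is_min].round(0))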

  3. [Assessment of the strength of tobacco control on creating smoke-free hospitals using principal components analysis].

    PubMed

    Liu, Hui-lin; Wan, Xia; Yang, Gong-huan

    2013-02-01

    To explore the relationship between the strength of tobacco control and the effectiveness of creating smoke-free hospitals, and to summarize the main factors that affect progress in creating smoke-free hospitals. A total of 210 hospitals from 7 provinces/municipalities directly under the central government were enrolled in this study using a stratified random sampling method. Principal component analysis and regression analysis were conducted to analyze the strength of tobacco control and the effectiveness of creating smoke-free hospitals. Two principal components were extracted from the strength-of-tobacco-control index, which respectively reflected the tobacco control policies and efforts, and the willingness and leadership of hospital managers regarding tobacco control. The regression analysis indicated that only the first principal component was significantly correlated with progress in creating smoke-free hospitals (P<0.001), i.e. hospitals with higher scores on the first principal component had better achievements in smoke-free environment creation. Tobacco control policies and efforts are critical in creating smoke-free hospitals. Principal component analysis provides a comprehensive and objective tool for evaluating the creation of smoke-free hospitals.

  4. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    ERIC Educational Resources Information Center

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  5. Molecular dynamics in principal component space.

    PubMed

    Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L

    2012-07-26

    A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved, without changing the ensemble, by assigning masses to the principal components proportional to the inverse square root of the eigenvalues. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated 6-7 times faster sampling than a regular molecular dynamics simulation.

  6. Optimized principal component analysis on coronagraphic images of the fomalhaut system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
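
    A numpy sketch of PCA-based PSF subtraction: model the stellar PSF in a science frame as its projection onto the first k principal components of a reference stack, then subtract. Frame sizes, the schematic halo, and the injected "planet" are illustrative.

      import numpy as np

      rng = np.random.default_rng(14)
      n_ref, npix = 80, 4096
      psf = 100.0 * np.exp(-np.linspace(0, 8, npix))    # schematic stellar halo
      lib = psf + rng.normal(size=(n_ref, npix))        # reference frames with "speckles"
      science = psf + rng.normal(size=npix)
      science[1500] += 3.0                              # faint injected "planet" pixel

      mean_ref = lib.mean(axis=0)
      _, _, Vt = np.linalg.svd(lib - mean_ref, full_matrices=False)

      def psf_subtract(frame, k):
          # Stellar PSF model: mean reference plus projection onto the first k PCs.
          coeffs = (frame - mean_ref) @ Vt[:k].T
          return frame - (mean_ref + coeffs @ Vt[:k])

      for k in (2, 10, 40):   # more PCs model the PSF better but add background noise
          res = psf_subtract(science, k)
          print(f"k={k:2d}: planet pixel = {res[1500]:.2f}, residual rms = {res.std():.2f}")

    Sweeping k in this way mirrors the trade-off the abstract describes between speckle removal and background noise.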

  7. Using Singular Value Decomposition to Investigate Degraded Chinese Character Recognition: Evidence from Eye Movements during Reading

    ERIC Educational Resources Information Center

    Wang, Hsueh-Cheng; Schotter, Elizabeth R.; Angele, Bernhard; Yang, Jinmian; Simovici, Dan; Pomplun, Marc; Rayner, Keith

    2013-01-01

    Previous research indicates that removing initial strokes from Chinese characters makes them harder to read than removing final or internal ones. In the present study, we examined the contribution of important components to character configuration via singular value decomposition. The results indicated that when the least important segments, which…
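
    A short numpy illustration of the underlying tool: reconstructing a character-like binary image from its leading singular components, where the singular values rank how much configural structure each component carries. The toy glyph is illustrative.

      import numpy as np

      img = np.zeros((16, 16))      # a crude "glyph": two strokes
      img[3, 2:14] = 1.0            # horizontal stroke
      img[3:13, 7] = 1.0            # vertical stroke

      U, s, Vt = np.linalg.svd(img)
      for k in (1, 2, 3):
          approx = U[:, :k] * s[:k] @ Vt[:k]          # rank-k reconstruction
          err = np.linalg.norm(img - approx) / np.linalg.norm(img)
          print(f"rank-{k} relative reconstruction error: {err:.3f}")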

  8. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  9. Extensions to decomposition of the redistributive effect of health care finance.

    PubMed

    Zhong, Hai

    2009-10-01

    The total redistributive effect (RE) of health-care finance has been decomposed into vertical, horizontal and reranking effects. The vertical effect has been further decomposed into tax rate and tax structure effects. We extend this latter decomposition to the horizontal and reranking components of the RE. We also show how to measure the vertical, horizontal and reranking effects of each component of the redistributive system, allowing analysis of the RE of health-care finance in the context of that system. The methods are illustrated with application to the RE of health-care financing in Canada.

  10. [A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].

    PubMed

    Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei

    2010-04-01

    It is hard to differentiate the same species of wild-growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of boletus bicolor from five different areas. Based on the fingerprint infrared spectra of the boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1 350-750 cm(-1) using the statistical software SPSS 13.0. The accumulated contribution of the first three principal components accounts for 88.87% of the variance, so they include almost all the information in the samples. The two-dimensional projection plot using the first and second principal components shows a satisfactory clustering effect for the classification and discrimination of boletus bicolor. All boletus bicolor samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrated that wild-growing boletus bicolor from different areas can be identified at the species level by FTIR spectra combined with principal component analysis.

  11. Changes in mass and nutrient content of wood during decomposition in a south Florida mangrove forest

    USGS Publications Warehouse

    Romero, L.M.; Smith, T. J.; Fourqurean, J.W.

    2005-01-01

    1 Large pools of dead wood in mangrove forests following disturbances such as hurricanes may influence nutrient fluxes. We hypothesized that decomposition of wood of mangroves from Florida, USA (Avicennia germinans, Laguncularia racemosa and Rhizophora mangle), and the consequent nutrient dynamics, would depend on species, location in the forest relative to freshwater and marine influences and whether the wood was standing, lying on the sediment surface or buried. 2 Wood disks (8-10 cm diameter, 1 cm thick) from each species were set to decompose at sites along the Shark River, either buried in the sediment, on the soil surface or in the air (above both the soil surface and high tide elevation). 3 A simple exponential model described the decay of wood in the air, and neither species nor site had any effect on the decay coefficient during the first 13 months of decomposition. 4 Over 28 months of decomposition, buried and surface disks decomposed following a two-component model, with labile and refractory components. Avicennia germinans had the largest labile component (18 ± 2% of dry weight), while Laguncularia racemosa had the lowest (10 ± 2%). Labile components decayed at rates of 0.37-23.71% month⁻¹, while refractory components decayed at rates of 0.001-0.033% month⁻¹. Disks decomposing on the soil surface had higher decay rates than buried disks, but both were higher than disks in the air. All species had similar decay rates of the labile and refractory components, but A. germinans exhibited faster overall decay because of a higher proportion of labile components. 5 Nitrogen content generally increased in buried and surface disks, but there was little change in N content of disks in the air over the 2-year study. Between 17% and 68% of total phosphorus in wood leached out during the first 2 months of decomposition, with buried disks having the greater losses, P remaining constant or increasing slightly thereafter. 6 Newly deposited wood from living trees was a short-term source of N for the ecosystem but, by the end of 2 years, had become a net sink. Wood, however, remained a source of P for the ecosystem. 7 As in other forested ecosystems, coarse woody debris can have a significant impact on carbon and nutrient dynamics in mangrove forests. The prevalence of disturbances, such as hurricanes, that can deposit large amounts of wood on the forest floor accentuates the importance of downed wood in these forests. © 2005 British Ecological Society.
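
    The two-component model referred to above is a double-exponential decay, and fitting it is straightforward. A minimal sketch with made-up mass-loss data follows (the time points and masses below are illustrative, not the study's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_pool(t, labile, k_l, k_r):
        """Percent mass remaining, with labile and refractory pools."""
        return labile * np.exp(-k_l * t) + (100.0 - labile) * np.exp(-k_r * t)

    t = np.array([0, 2, 6, 13, 20, 28], float)           # months
    mass = np.array([100, 91, 85, 81, 79, 78], float)    # illustrative data

    (labile, k_l, k_r), _ = curve_fit(two_pool, t, mass,
                                      p0=(15, 0.5, 0.005),
                                      bounds=(0, [100.0, 5.0, 1.0]))
    print(f"labile pool {labile:.1f}%, k_labile {k_l:.3f}/month, "
          f"k_refractory {k_r:.4f}/month")
    ```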

  12. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  13. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established; the parameters of the filter are determined by solving a nonlinear optimization problem, with a regulated differential operator used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to address problems that exist in ASTFA. The Gauss-Newton type method applied to solve the optimization problem in ASTFA cannot be replaced and is very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.

  14. Traits drive global wood decomposition rates more than climate.

    PubMed

    Hu, Zhenhong; Michaletz, Sean T; Johnson, Daniel J; McDowell, Nate G; Huang, Zhiqun; Zhou, Xuhui; Xu, Chonggang

    2018-06-14

    Wood decomposition is a major component of the global carbon cycle. Decomposition rates vary across climate gradients, which is thought to reflect the effects of temperature and moisture on the metabolic kinetics of decomposers. However, decomposition rates also vary with wood traits, which may reflect the influence of stoichiometry on decomposer metabolism as well as geometry relating the surface areas that decomposers colonize with the volumes they consume. In this paper, we combined metabolic and geometric scaling theories to formalize hypotheses regarding the drivers of wood decomposition rates, and assessed these hypotheses using a global compilation of data on climate, wood traits, and wood decomposition rates. Our results are consistent with predictions from both metabolic and geometric scaling theories. Approximately half of the global variation in decomposition rates was explained by wood traits (nitrogen content and diameter), while only a fifth was explained by climate variables (air temperature, precipitation, and relative humidity). These results indicate that global variation in wood decomposition rates is best explained by stoichiometric and geometric wood traits. Our findings suggest that inclusion of wood traits in global carbon cycle models can improve predictions of carbon fluxes from wood decomposition. This article is protected by copyright. All rights reserved.

  15. The Natural Helmholtz-Hodge Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatia, H.

    nHHD is a C++ library to decompose a flow field into three components exhibiting specific types of behaviors. These components allow more targeted analysis of flow behavior and can be applied to a variety of application areas.
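
    The library itself computes the boundary-aware "natural" decomposition; as a point of reference, the classical Helmholtz-Hodge split is compact to write down for a periodic 2D field using FFTs. The sketch below assumes periodic boundaries (precisely the assumption nHHD is designed to avoid) and is not the library's algorithm:

    ```python
    import numpy as np

    def hhd_periodic(vx, vy):
        """Split a periodic 2D field v into curl-free (d), divergence-free (r)
        and harmonic (h) parts, v = d + r + h, via Fourier-space Poisson solves."""
        n, m = vx.shape
        kx = 2j * np.pi * np.fft.fftfreq(m)[None, :]
        ky = 2j * np.pi * np.fft.fftfreq(n)[:, None]
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                        # avoid dividing the mean mode by zero
        Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
        phi = (kx * Vx + ky * Vy) / k2        # lap(phi) = div(v);  d = grad(phi)
        psi = (kx * Vy - ky * Vx) / k2        # lap(psi) = curl(v); r = (-psi_y, psi_x)
        dx, dy = np.fft.ifft2(kx * phi).real, np.fft.ifft2(ky * phi).real
        rx, ry = -np.fft.ifft2(ky * psi).real, np.fft.ifft2(kx * psi).real
        return (dx, dy), (rx, ry), (vx - dx - rx, vy - dy - ry)
    ```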

  16. A global experiment suggests climate warming will not accelerate litter decomposition in streams but might reduce carbon sequestration.

    PubMed

    Boyero, Luz; Pearson, Richard G; Gessner, Mark O; Barmuta, Leon A; Ferreira, Verónica; Graça, Manuel A S; Dudgeon, David; Boulton, Andrew J; Callisto, Marcos; Chauvet, Eric; Helson, Julie E; Bruder, Andreas; Albariño, Ricardo J; Yule, Catherine M; Arunachalam, Muthukumarasamy; Davies, Judy N; Figueroa, Ricardo; Flecker, Alexander S; Ramírez, Alonso; Death, Russell G; Iwata, Tomoya; Mathooko, Jude M; Mathuriau, Catherine; Gonçalves, José F; Moretti, Marcelo S; Jinggut, Tajang; Lamothe, Sylvain; M'Erimba, Charles; Ratnarajah, Lavenia; Schindler, Markus H; Castela, José; Buria, Leonardo M; Cornejo, Aydeé; Villanueva, Verónica D; West, Derek C

    2011-03-01

    The decomposition of plant litter is one of the most important ecosystem processes in the biosphere and is particularly sensitive to climate warming. Aquatic ecosystems are well suited to studying warming effects on decomposition because the otherwise confounding influence of moisture is constant. By using a latitudinal temperature gradient in an unprecedented global experiment in streams, we found that climate warming will likely hasten microbial litter decomposition and produce an equivalent decline in detritivore-mediated decomposition rates. As a result, overall decomposition rates should remain unchanged. Nevertheless, the process would be profoundly altered, because the shift in importance from detritivores to microbes in warm climates would likely increase CO₂ production and decrease the generation and sequestration of recalcitrant organic particles. In view of recent estimates showing that inland waters are a significant component of the global carbon cycle, this implies consequences for global biogeochemistry and a possible positive climate feedback. © 2011 Blackwell Publishing Ltd/CNRS.

  17. How multi segmental patterns deviate in spastic diplegia from typical developed.

    PubMed

    Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela

    2017-10-01

    The relationship between gait features and coordination in children with Cerebral Palsy is not sufficiently analyzed yet. Principal Component Analysis can help in understanding motion patterns decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine if posture affects gait patterns, patients were split into Crouch and knee Hyperextension group according to knee flexion angle at standing. 3D coordinates of hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of global variance; components 1-3 explained major sagittal patterns, components 4-5 referred to movements on frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01), and the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns), were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A reduction in ag/residential signature conflict using principal components analysis of LANDSAT temporal data

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Borden, F. Y.

    1977-01-01

    Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.

  19. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. The method is validated on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
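
    As a rough illustration of why the TV constraint helps, the sketch below sets up a toy two-channel, two-material decomposition and compares the direct per-pixel inverse with a TV-regularized estimate. It uses a proximal-gradient loop with scikit-image's TV denoiser as a stand-in for the paper's ADMM solver; the sensitivity matrix and weights are invented for the example:

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    # Toy forward model: y[c] = sum_m A[c, m] * x[m], per pixel (illustrative).
    A = np.array([[0.8, 0.3],
                  [0.4, 0.9]])                  # channel-by-material sensitivities
    rng = np.random.default_rng(1)
    x_true = rng.random((2, 64, 64))
    y = np.einsum('cm,mij->cij', A, x_true) + 0.02 * rng.normal(size=(2, 64, 64))

    # Direct inverse: amplifies noise in the coefficient images.
    x = np.einsum('mc,cij->mij', np.linalg.inv(A), y)

    # TV-regularized refinement (proximal gradient on 0.5*||Ax - y||^2 + w*TV).
    step, weight = 0.5, 0.05
    for _ in range(50):
        resid = np.einsum('cm,mij->cij', A, x) - y
        x -= step * np.einsum('cm,cij->mij', A, resid)      # data-fidelity gradient
        x = np.stack([denoise_tv_chambolle(c, weight=step * weight) for c in x])
    ```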

  20. Molecular carbon isotopic evidence for the origin of geothermal hydrocarbons

    USGS Publications Warehouse

    Des Marais, D.J.; Donchin, J.H.; Nehring, N.L.; Truesdell, A.H.

    1981-01-01

    Previous interest in light hydrocarbons from geothermal systems has focused principally on the origin of the methane [1] and the estimation of subsurface temperatures from the carbon isotopic content of coexisting methane and carbon dioxide [1-3]. Higher molecular weight hydrocarbons were first reported in gases from Yellowstone National Park [4], and have since been found to occur commonly in geothermal emanations in the western United States [5]. Isotopic measurements of individual geothermal hydrocarbons are now reported which help to explain the origin of these hydrocarbons. The thermal decomposition of sedimentary or groundwater organic matter is a principal source of hydrocarbons in four geothermal areas in western North America. © 1981 Nature Publishing Group.

  1. Research on technology of online gas chromatograph for SF6 decomposition products

    NASA Astrophysics Data System (ADS)

    Li, L.; Fan, X. P.; Zhou, Y. Y.; Tang, N.; Zou, Z. L.; Liu, M. Z.; Huang, G. J.

    2017-12-01

    Sulfur hexafluoride (SF6) decomposition products were qualitatively and quantitatively analyzed by several gas chromatographs in the laboratory. Test conditions and methods were selected and optimized to minimize and eliminate the influence of SF6 on the detection of other trace components. The effective separation and detection of selected characteristic gases were achieved. By comparison among different types of gas chromatographs, it was found that the GPTR-S101 can effectively separate and detect SF6 decomposition products and has the best detection limit and sensitivity. On the basis of the GPTR-S101, an online gas chromatograph for SF6 decomposition products (GPTR-S201) was developed. It lays the foundation for further online monitoring and diagnosis of SF6.

  2. A measure for objects clustering in principal component analysis biplot: A case study in inter-city buses maintenance cost data

    NASA Astrophysics Data System (ADS)

    Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.

    2017-03-01

    This article presents the application of the principal component analysis (PCA) biplot to the needs of data mining. It aims to simplify and objectify the methods for object clustering in a PCA biplot. The novelty of this paper is a measure that can be used to objectify object clustering in a PCA biplot. The orthonormal eigenvectors are the coefficients of a principal component model, representing an association between the principal components and the initial variables. The existence of this association is a valid ground for clustering objects based on their principal-axis values; thus, if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. The inter-city buses are clustered based on maintenance cost data using a two-principal-axes PCA biplot. The buses are clustered into four groups. The first group is the buses with high maintenance costs, especially for lube and brake canvass. The second group is the buses with high maintenance costs, especially for tire and filter. The third group is the buses with low maintenance costs, especially for lube and brake canvass. The fourth group is the buses with low maintenance costs, especially for tire and filter.
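
    The clustering rule itself reduces to the signs of the principal-axis scores. A minimal sketch of that idea with random stand-in cost data follows (the sign-quadrant rule below is a simplified reading of the paper's measure):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 6))        # rows = buses, cols = maintenance cost items

    scores = PCA(n_components=2).fit_transform(X)    # m = 2 principal axes
    # Each object's sign pattern on the m axes selects one of 2**m clusters;
    # with two axes these are the four quadrants of the biplot.
    labels = (scores > 0) @ (2 ** np.arange(2))
    print(np.bincount(labels, minlength=4))          # cluster sizes
    ```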

  3. Survey to Identify Substandard and Falsified Tablets in Several Asian Countries with Pharmacopeial Quality Control Tests and Principal Component Analysis of Handheld Raman Spectroscopy.

    PubMed

    Kakio, Tomoko; Nagase, Hitomi; Takaoka, Takashi; Yoshida, Naoko; Hirakawa, Junichi; Macha, Susan; Hiroshima, Takashi; Ikeda, Yukihiro; Tsuboi, Hirohito; Kimura, Kazuko

    2018-06-01

    The World Health Organization has warned that substandard and falsified medical products (SFs) can harm patients and fail to treat the diseases for which they were intended; they affect every region of the world, leading to loss of confidence in medicines, health-care providers, and health systems. Therefore, the development of analytical procedures to detect SFs is extremely important. In this study, we investigated the quality of pharmaceutical tablets containing the antihypertensive candesartan cilexetil, collected in China, Indonesia, Japan, and Myanmar, using the Japanese pharmacopeial analytical procedures for quality control, together with principal component analysis (PCA) of Raman spectra obtained with a handheld Raman spectrometer. Some samples showed delayed dissolution and failed to meet the pharmacopeial specification, whereas others failed the assay test; these products appeared to be substandard. Principal component analysis showed that all Raman spectra could be explained in terms of two components: the amount of the active pharmaceutical ingredient and the kinds of excipients. The PCA score plot indicated that one substandard product and the falsified tablets have similar principal components in their Raman spectra, in contrast to authentic products. The locations of samples within the PCA score plot varied according to the source country, suggesting that manufacturers in different countries use different excipients. Our results indicate that the handheld Raman device will be useful for the detection of SFs in the field, and that principal component analysis of the Raman data clarifies the difference in chemical properties between good quality products and the SFs that circulate in the Asian market.

  4. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  5. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to the empirical reconstruction of the evolution operator, in stochastic form, from spatially distributed time series. The main problem in empirical modeling is choosing appropriate phase variables that efficiently reduce the dimension of the model with minimal loss of information about the system's dynamics; this leads to a more robust model and better quality of reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g. an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator from the principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. The results of applying the method to climate data (sea surface temperature, sea level pressure), and their comparison with the same method based on non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  6. A hybrid model for PM₂.₅ forecasting based on ensemble empirical mode decomposition and a general regression neural network.

    PubMed

    Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling

    2014-10-15

    Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid or liquid droplets that are sufficiently small to be ingested deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose the original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from a principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model, with fast and accurate results, can be used to develop rapid air quality warning systems. Copyright © 2014 Elsevier B.V. All rights reserved.
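
    The decompose-forecast-aggregate structure is simple to sketch. The version below assumes the PyEMD package (EMD-signal on PyPI) for EEMD, replaces the PCR-selected inputs with plain lagged values, and uses a hand-rolled Nadaraya-Watson regressor, which is essentially what a GRNN computes; all data are synthetic:

    ```python
    import numpy as np
    from PyEMD import EEMD          # assumes the PyEMD package (pip install EMD-signal)

    def grnn_predict(X, y, x_new, sigma=0.5):
        """GRNN-style (Nadaraya-Watson) prediction for a single query point."""
        w = np.exp(-np.sum((X - x_new) ** 2, axis=1) / (2.0 * sigma ** 2))
        return float(w @ y / (w.sum() + 1e-12))

    def lagged(series, p=3):
        """Lagged-value design matrix and one-step-ahead targets."""
        X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
        return X, series[p:]

    rng = np.random.default_rng(0)
    pm25 = 60 + 20 * np.sin(np.arange(300) / 10) + rng.normal(0, 5, 300)

    imfs = EEMD().eemd(pm25)                 # IMFs (the last row is the residue)
    forecast = sum(grnn_predict(*lagged(imf), imf[-3:]) for imf in imfs)
    print(f"one-step-ahead forecast: {forecast:.1f}")
    ```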

  7. Socioeconomic Inequalities in Nonuse of Seatbelts in Cars and Helmets on Motorcycles among People Living in Kurdistan Province, Iran.

    PubMed

    Moradi, Ghobad; Malekafzali Ardakani, Hossein; Majdzadeh, Reza; Bidarpour, Farzam; Mohammad, Kazem; Holakouie-Naieni, Kourosh

    2014-09-01

    The aim of this study was to determine the socioeconomic inequalities in nonuse of seatbelts in cars and helmets on motorcycles in Kurdistan Province, west of Iran, 2009. The data used in this study were drawn from the non-communicable disease surveillance system (NCDSS) in 2009 in Kurdistan. A total of 1000 people were included in this study. The outcome variable of this study was the nonuse of seatbelts and helmets. The socio-economic status (SES) was calculated based on participants' residential area and assets using the Principal Component Analysis (PCA) method. The concentration index, the concentration curve, and a comparison of odds ratios (OR) in different SES groups, estimated by logistic regression, were used to measure the socioeconomic inequalities. In order to determine the contribution of the determinants of inequality, decomposition analysis was used. The prevalence of nonuse of seatbelts in cars and helmets on motorcycles was 47.5%, 95% CI [44%, 55%]. The concentration index was -0.097, CI [-0.148, -0.046]. The OR of nonuse of seatbelts in cars and helmets on motorcycles in the richest group compared with the poorest group was 0.39, 95% CI [0.23, 0.68]. The results of the decomposition analysis showed that 34% of inequalities were due to SES, 47% were due to residential area, and 12% were due to unknown factors. There is an inverse association between SES and nonuse of seatbelts in cars and helmets on motorcycles. This issue must be considered when planning to reduce traffic accident injuries.
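
    The key quantity here, the concentration index, has a convenient covariance form. A minimal sketch with simulated data follows (the SES scores and outcome are synthetic; a negative index indicates the outcome is concentrated among the poor):

    ```python
    import numpy as np

    def concentration_index(outcome, ses):
        """C = 2*cov(h, r)/mean(h), where r is the fractional rank of
        individuals ordered from poorest to richest (covariance formula)."""
        order = np.argsort(ses)
        h = np.asarray(outcome, float)[order]
        n = len(h)
        r = (np.arange(1, n + 1) - 0.5) / n
        return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

    rng = np.random.default_rng(0)
    ses = rng.normal(size=1000)
    nonuse = (rng.random(1000) < 0.6 - 0.1 * (ses > 0)).astype(float)
    print(concentration_index(nonuse, ses))       # negative, as in the abstract
    ```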

  8. Is the status of diabetes socioeconomic inequality changing in Kurdistan province, west of Iran? A comparison of two surveys.

    PubMed

    Moradi, Ghobad; Majdzadeh, Reza; Mohammad, Kazem; Malekafzali, Hossein; Jafari, Saeede; Holakouie-Naieni, Kourosh

    2016-01-01

    About 80% of deaths in 350 million cases of diabetes in the world occur in low and middle income countries. The aim of this study was to determine the status of diabetes socioeconomic inequality and the share of determinants of inequalities in Kurdistan Province, West of Iran, using two surveys in 2005 and 2009. Data were collected from non-communicable disease surveillance surveys in Kurdistan in 2005 and 2009. In this study, the socioeconomic status (SES) of the participants was determined based on the residential area and assets using principal component analysis statistical method. We used concentration index and logistic regression to determine inequality. Decomposition analysis was used to determine the share of each determinant of inequality. The prevalence of diabetes expressed by individuals changed from 0.9% (95% CI: 0.6-1.3) in 2005 to 3.1% (95% CI: 2-4) in 2009. Diabetes Concentration Index changed from -0.163 (95% CI: -0.301- -0.024) in 2005 to 0.273 (95% CI: 0.101-0.445) in 2009. The results of decomposition analysis revealed that in 2009, 67% of the inequality was due to low socioeconomic status and 16% to area of residence; i.e., living in rural areas. The prevalence of diabetes significantly increased, and the diabetes inequality shifted from the poor people to groups with better SES. Increased prevalence of diabetes among the high SES individuals may be due to their better responses to diabetes control and awareness programs or due to the type of services they were provided during these years.

  9. Socioeconomic Inequalities in Nonuse of Seatbelts in Cars and Helmets on Motorcycles among People Living in Kurdistan Province, Iran

    PubMed Central

    MORADI, Ghobad; MALEKAFZALI ARDAKANI, Hossein; MAJDZADEH, Reza; BIDARPOUR, Farzam; MOHAMMAD, Kazem; HOLAKOUIE-NAIENI, Kourosh

    2014-01-01

    Abstract Background The aim of this study was to determine the socioeconomic inequalities in nonuse of seatbelts in cars and helmets on motorcycles in Kurdistan Province, west of Iran, 2009. Methods The data used in this study were drawn from the non-communicable disease surveillance system (NCDSS) in 2009 in Kurdistan. A total of 1000 people were included in this study. The outcome variable of this study was the nonuse of seatbelts and helmets. The socio-economic status (SES) was calculated based on participants' residential area and assets using the Principal Component Analysis (PCA) method. The concentration index, the concentration curve, and a comparison of odds ratios (OR) in different SES groups, estimated by logistic regression, were used to measure the socioeconomic inequalities. In order to determine the contribution of the determinants of inequality, decomposition analysis was used. Results The prevalence of nonuse of seatbelts in cars and helmets on motorcycles was 47.5%, 95% CI [44%, 55%]. The concentration index was -0.097, CI [-0.148, -0.046]. The OR of nonuse of seatbelts in cars and helmets on motorcycles in the richest group compared with the poorest group was 0.39, 95% CI [0.23, 0.68]. The results of the decomposition analysis showed that 34% of inequalities were due to SES, 47% were due to residential area, and 12% were due to unknown factors. Conclusion There is an inverse association between SES and nonuse of seatbelts in cars and helmets on motorcycles. This issue must be considered when planning to reduce traffic accident injuries. PMID:26175978

  10. How to validate similarity in linear transform models of event-related potentials between experimental conditions?

    PubMed

    Cong, Fengyu; Lin, Qiu-Hua; Astikainen, Piia; Ristaniemi, Tapani

    2014-10-30

    It is well known that data of event-related potentials (ERPs) conform to the linear transform model (LTM). For group-level ERP data processing using principal/independent component analysis (PCA/ICA), ERP data of different experimental conditions and different participants are often concatenated. It is theoretically assumed that different experimental conditions and different participants possess the same LTM. However, how to validate this assumption has seldom been reported in terms of signal processing methods. When ICA decomposition is globally optimized for ERP data of one stimulus, we obtain the ratio between two coefficients mapping a source in the brain to two points along the scalp. Based on such a ratio, we defined a relative mapping coefficient (RMC). If RMCs between two conditions for an ERP are not significantly different in practice, the mapping coefficients of this ERP between the two conditions are statistically identical. We examined whether the same LTM of ERP data could be applied for two different stimulus types of fearful and happy facial expressions, used in an ignore-oddball paradigm in adult human participants. We found no significant difference in LTMs (based on ICASSO) of N170 responses to the fearful and the happy faces in terms of RMCs of N170. We found no existing methods for a straightforward comparison. The proposed RMC in light of ICA decomposition is an effective approach for validating the similarity of LTMs of ERPs between experimental conditions. This is fundamental to applying group-level PCA/ICA to process ERP data. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Socioeconomic inequality in hypertension in Iran.

    PubMed

    Fateh, Mansooreh; Emamian, Mohammad Hassan; Asgari, Fereshteh; Alami, Ali; Fotouhi, Akbar

    2014-09-01

    Hypertension accounts for a large portion of the burden of disease, especially in developing countries. The unequal distribution of hypertension in the population may affect the 'health for all' goal. This study aimed to investigate the socioeconomic inequality of hypertension in Iran and to identify its influencing factors. We used data from Iran's surveillance system for risk factors of noncommunicable diseases, which was conducted on 89 400 individuals aged 15-64 years in 2005. To determine the socioeconomic status of participants, a new variable was created using a principal component analysis. We examined hypertension at different levels of this new variable and calculated the slope index of inequality (SII) and concentration index (C) for hypertension. We then applied Oaxaca-Blinder decomposition analysis to determine the causes of inequality. The SII and C for hypertension were -32.3 and -0.170, respectively. The concentration indices varied widely between different provinces in Iran and were lower (more unequal) in women than in men. There was significant socioeconomic inequality in hypertension. The results of the decomposition indicated that 40.5% of the low-socioeconomic group (n = 18190) and 16.4% of the high-socioeconomic group (n = 16335) had hypertension. Age, education level, sex and residency location were the main factors associated with the difference among groups. According to our results, there was an inequality in hypertension in Iran, such that individuals with low socioeconomic status had a higher prevalence of hypertension. Age was the factor contributing most to this inequality, and women in the low-socioeconomic group were the most vulnerable to hypertension.
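
    The Oaxaca-Blinder step can be written in a few lines for a linear model. The sketch below uses the twofold decomposition on a linear probability model, which is a simplification; decompositions of binary outcomes in the health-inequality literature often use nonlinear extensions:

    ```python
    import numpy as np

    def oaxaca_blinder(X_a, y_a, X_b, y_b):
        """Twofold decomposition of the mean outcome gap between groups a and b
        into an 'explained' part (covariate differences, valued at group-b
        coefficients) and an 'unexplained' part (coefficient differences)."""
        add1 = lambda X: np.column_stack([np.ones(len(X)), X])
        beta_a, *_ = np.linalg.lstsq(add1(X_a), y_a, rcond=None)
        beta_b, *_ = np.linalg.lstsq(add1(X_b), y_b, rcond=None)
        xa, xb = add1(X_a).mean(axis=0), add1(X_b).mean(axis=0)
        explained = (xa - xb) @ beta_b
        unexplained = xa @ (beta_a - beta_b)
        return explained, unexplained     # sums to y_a.mean() - y_b.mean()
    ```

    Because OLS with an intercept passes through the means, the two parts sum exactly to the raw gap, which is what lets the analysis attribute shares of inequality to covariates such as age, education and residency.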

  12. The deconvolution of complex spectra by artificial immune system

    NASA Astrophysics Data System (ADS)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method for decomposition of complex spectra is presented. The results of decomposition of the model contour consisting of three components, Gaussian contours, are demonstrated. The method of artificial immune system is an optimization method, which is based on the behaviour of the immune system and refers to modern methods of search for the engine optimization.
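
    SciPy does not ship an artificial immune system, but the same deconvolution can be posed for another population-based global optimizer; the sketch below fits three Gaussian contours with differential evolution as a stand-in for the immune algorithm, on a synthetic model contour:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    x = np.linspace(0, 10, 400)

    def three_gaussians(p):
        """Sum of three Gaussian contours with (amplitude, center, width) each."""
        a1, m1, s1, a2, m2, s2, a3, m3, s3 = p
        return (a1 * np.exp(-(x - m1)**2 / (2 * s1**2))
              + a2 * np.exp(-(x - m2)**2 / (2 * s2**2))
              + a3 * np.exp(-(x - m3)**2 / (2 * s3**2)))

    truth = three_gaussians([1.0, 3.0, 0.5, 0.6, 5.0, 0.8, 0.9, 7.0, 0.4])
    contour = truth + np.random.default_rng(0).normal(0, 0.01, x.size)

    bounds = [(0, 2), (0, 10), (0.1, 2)] * 3      # per-component parameter ranges
    fit = differential_evolution(
        lambda p: np.sum((three_gaussians(p) - contour) ** 2), bounds, seed=0)
    print(np.round(fit.x, 2))     # recovered parameters, up to component ordering
    ```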

  13. A novel hybrid ensemble learning paradigm for tourism forecasting

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-02-01

    In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. This methodology first decomposes the original visitor arrival series into several Intrinsic Model Function (IMFs) components and one residual component by EMD technique. Then, IMFs components and the residual components is forecasted respectively using GMDH model whose input variables are selected by using Partial Autocorrelation Function (PACF). The final forecasted result for tourism series is produced by aggregating all the forecasted results. For evaluating the performance of the proposed EMD-GMDH methodologies, the monthly data of tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.

  14. Decomposition Analyses Applied to a Complex Ultradian Biorhythm: The Oscillating NADH Oxidase Activity of Plasma Membranes Having a Potential Time-Keeping (Clock) Function

    PubMed Central

    Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James

    2003-01-01

    Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values were used to evaluate accuracy: the mean absolute percentage error (MAPE), a measure of the fit to the periodic oscillation; the mean absolute deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean squared deviation (MSD), a measure of the deviation from the fitted values; plus R-squared and the Henriksson-Merton p value. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
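
    The trend-removal and moving-average smoothing described above correspond closely to a standard additive seasonal decomposition. A minimal sketch with a synthetic 24-min oscillation follows (statsmodels is an assumption of this sketch; the paper used its own decomposition fits):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # One sample per minute, 24-min period, slight upward trend plus noise.
    t = np.arange(288)
    rng = np.random.default_rng(0)
    activity = 0.02 * t + np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.2, t.size)
    series = pd.Series(activity,
                       index=pd.date_range("2003-01-01", periods=t.size, freq="min"))

    parts = seasonal_decompose(series, model="additive", period=24)
    detrended = series - parts.trend              # trend removed, as described above
    mad = np.nanmean(np.abs(parts.resid))         # MAD-style error of the fit
    print(f"mean absolute deviation of the residual: {mad:.3f}")
    ```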

  15. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area, with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD we apply the proposed method, first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.

  16. Recurrence quantity analysis based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bian, Songhan; Shang, Pengjian

    2017-05-01

    Recurrence plot (RP) has turned into a powerful tool in many different sciences in the last three decades. To quantify the complexity and structure of an RP, recurrence quantification analysis (RQA) has been developed based on measures of recurrence density, diagonal lines, vertical lines and horizontal lines. This paper studies the RP based on singular value decomposition, which is a new perspective on RP analysis. The principal singular value proportion (PSVP) is proposed as a new RQA measure: a bigger PSVP means higher complexity for a system, while a smaller PSVP reflects a regular and stable system. Considering the advantage of this method in detecting the complexity and periodicity of systems, several simulation and real-data experiments are chosen to examine the performance of this new RQA measure.
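
    The proposed measure is compact enough to state in code. A minimal sketch follows (a 1-D phase space and a fixed recurrence threshold are simplifying assumptions; the paper's embedding choices may differ):

    ```python
    import numpy as np

    def psvp(series, eps=0.2):
        """Principal singular value proportion of a recurrence plot."""
        x = np.asarray(series, float)
        rp = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)  # recurrence matrix
        s = np.linalg.svd(rp, compute_uv=False)
        return s[0] / s.sum()

    t = np.linspace(0, 20 * np.pi, 500)
    print(psvp(np.sin(t)))                                     # periodic signal
    print(psvp(np.random.default_rng(0).normal(size=500)))     # white noise
    ```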

  17. Spectral-decomposition techniques for the identification of periodic and anomalous phenomena in radon time-series.

    NASA Astrophysics Data System (ADS)

    Crockett, R. G. M.; Perrier, F.; Richon, P.

    2009-04-01

    Building on independent investigations by research groups at both IPGP, France, and the University of Northampton, UK, hourly-sampled radon time-series of durations exceeding one year have been investigated for periodic and anomalous phenomena using a variety of established and novel techniques. These time-series have been recorded in locations having no routine human behaviour and thus are effectively free of significant anthropogenic influences. With regard to periodic components, the long durations of these time-series allow, in principle, very high frequency resolutions for established spectral-measurement techniques such as Fourier and maximum-entropy. However, as has been widely observed, the stochastic nature of radon emissions from rocks and soils, coupled with sensitivity to a wide variety of influences such as temperature, wind-speed and soil moisture-content, has made interpretation of the results obtained by such techniques very difficult, with uncertain results in many cases. We here report developments in the investigation of radon time-series for periodic and anomalous phenomena using spectral-decomposition techniques. These techniques, in variously separating 'high', 'middle' and 'low' frequency components, effectively 'de-noise' the data by allowing components of interest to be isolated from others which might serve to obscure weaker information-containing components. Once isolated, these components can be investigated using a variety of techniques. Whilst this work is still at an early stage of development, spectral-decomposition methods have been used successfully to indicate the presence of diurnal and sub-diurnal cycles in radon concentration which we provisionally attribute to tidal influences. Also, these methods have been used to enhance the identification of short-duration anomalies, attributable to a variety of causes including, for example, earthquakes and rapid large-magnitude changes in weather conditions. Keywords: radon; earthquakes; tidal-influences; anomalies; time series; spectral-decomposition.
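
    One way to realize the 'middle'-band separation described above is a zero-phase band-pass filter around one cycle per day. A minimal sketch on a synthetic hourly radon series follows (the filter order and band edges are illustrative choices, not the authors' settings):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 24.0                                    # samples per day (hourly data)
    rng = np.random.default_rng(0)
    days = np.arange(24 * 365) / fs
    radon = (50 + 10 * np.sin(2 * np.pi * days)          # diurnal cycle
               + 5 * np.sin(2 * np.pi * 1.93 * days)     # tidal-like sub-diurnal term
               + rng.normal(0, 8, days.size))            # stochastic emission

    # Band-pass 0.8-1.2 cycles/day (normalized to the Nyquist rate, 12 cpd).
    b, a = butter(4, [0.8 / (fs / 2), 1.2 / (fs / 2)], btype="bandpass")
    diurnal = filtfilt(b, a, radon)              # zero-phase 'middle' component
    ```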

  18. Application of empirical mode decomposition in removing fidgeting interference in doppler radar life signs monitoring devices.

    PubMed

    Mostafanezhad, Isar; Boric-Lubecke, Olga; Lubecke, Victor; Mandic, Danilo P

    2009-01-01

    Empirical Mode Decomposition has been shown to be effective in the analysis of non-stationary and non-linear signals. As an application to wireless life-signs monitoring, in this paper we use this method to condition the signals obtained from a Doppler radar device. Random physical movements (fidgeting) of the human subject during a measurement can fall on the same frequency as the heart or respiration rate and interfere with the measurement. It is shown how Empirical Mode Decomposition can break the radar signal down into its components and help separate and remove the fidgeting interference.

  19. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
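
    The parameter saving quoted above is quick to check numerically for the beef-cattle example (k = 8 effects):

    ```python
    k = 8                                    # number of correlated genetic effects
    full = k * (k + 1) // 2                  # unstructured covariance: 36 parameters
    for m in (1, 3, 5, 8):
        reduced = m * (2 * k - m + 1) // 2   # reduced-rank count with m PCs
        print(m, reduced)                    # m = k recovers the full 36
    ```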

  20. Recognition of units in coarse, unconsolidated braided-stream deposits from geophysical log data with principal components analysis

    USGS Publications Warehouse

    Morin, R.H.

    1997-01-01

    Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine if units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.

  1. Analysis and Evaluation of the Characteristic Taste Components in Portobello Mushroom.

    PubMed

    Wang, Jinbin; Li, Wen; Li, Zhengpeng; Wu, Wenhui; Tang, Xueming

    2018-05-10

    To identify the characteristic taste components of the common cultivated mushroom (brown; Portobello), Agaricus bisporus, taste components in the stipe and pileus of Portobello mushrooms harvested at different growth stages were extracted and identified, and principal component analysis (PCA) and taste active value (TAV) were used to reveal the characteristic taste components at each of the growth stages of the Portobello mushroom. In the stipe and pileus, 20 and 14 different principal taste components were identified, respectively, and they were considered the principal taste components of Portobello mushroom fruit bodies, which included most amino acids and 5'-nucleotides. Some taste components that were found at high levels, such as lactic acid and citric acid, were not identified as principal taste components of Portobello mushroom through PCA. However, due to their high content, Portobello mushroom could be used as a source of organic acids. The PCA and TAV results revealed that 5'-GMP, glutamic acid, malic acid, alanine, proline, leucine, and aspartic acid were the characteristic taste components of Portobello mushroom fruit bodies. Portobello mushroom was also found to be rich in protein and amino acids, so it might also be useful in the formulation of nutraceuticals and functional foods. The results in this article could provide a theoretical basis for understanding and regulating the synthesis of the characteristic flavor components of Portobello mushroom. © 2018 Institute of Food Technologists®.

  2. Applications of principal component analysis to breath air absorption spectra profiles classification

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.

    2015-12-01

    The results of a numerical simulation applying principal component analysis to absorption spectra of the breath air of patients with pulmonary diseases are presented. Various methods of experimental data preprocessing are analyzed.

  3. Supercritical Water Process for the Chemical Recycling of Waste Plastics

    NASA Astrophysics Data System (ADS)

    Goto, Motonobu

    2010-11-01

    The development of chemical recycling of waste plastics by decomposition reactions in sub- and supercritical water is reviewed. Decomposition reactions proceed rapidly and selectively in supercritical fluids compared to conventional processes. Condensation polymerization plastics, such as PET, nylon, and polyurethane, are relatively easily depolymerized to their monomers in supercritical water, and the monomer components are recovered in high yield. Addition polymerization plastics, such as phenol resin, epoxy resin, and polyethylene, are also decomposed to monomer components with or without catalysts. The recycling process of fiber-reinforced plastics has also been studied. Pilot-scale and commercial-scale plants have been developed and are operating with sub- and supercritical fluids.

  4. The moments of inertia of Mars

    NASA Technical Reports Server (NTRS)

    Bills, Bruce G.

    1989-01-01

    The mean moment of inertia of Mars is, at present, very poorly constrained. The generally accepted value of 0.365 MR² is obtained by assuming that the observed second degree gravity field can be decomposed into a hydrostatic oblate spheroid and a nonhydrostatic prolate spheroid with an equatorial axis of symmetry. An alternative decomposition is advocated in the present analysis. If the nonhydrostatic component is a maximally triaxial ellipsoid (intermediate moment exactly midway between greatest and least), the hydrostatic component is consistent with a mean moment of 0.345 MR². The plausibility of this decomposition is supported by statistical arguments and comparison with the earth, moon and Venus.

  5. Analysis of the Sensitivity of K-Type Molecular Sieve-Deposited MWNTs for the Detection of SF6 Decomposition Gases under Partial Discharge

    PubMed Central

    Zhang, Xiaoxing; Li, Xin; Luo, Chenchen; Dong, Xingchen; Zhou, Lei

    2015-01-01

    Sulfur hexafluoride (SF6) is widely utilized in gas-insulated switchgear (GIS). However, part of SF6 decomposes into different components under partial discharge (PD) conditions. Previous research has shown that the gas responses of intrinsic and 4 Å-type molecular sieve-deposited multi-wall carbon nanotubes (MWNTs) to SOF2 and SO2F2, two important decomposition components of SF6, are not obvious. In this study, a K-type molecular sieve-deposited MWNTs sensor was developed. Its gas response characteristics and the influence of the mixture ratios of gases on the gas-sensing properties were studied. The results showed that, for sensors with gas mixture ratios of 5:1, 10:1, and 20:1, the resistance change rate increased by nearly 13.0% after SOF2 adsorption, almost 10 times that of MWNTs sensors, while the sensors’ resistance change rate with a mixture ratio of 10:1 reached 17.3% after SO2F2 adsorption, nearly nine times that of intrinsic MWNT sensors. Besides, a good linear relationship was observed between concentration of decomposition components and the resistance change rate of sensors. PMID:26569245

  6. Analysis of the Sensitivity of K-Type Molecular Sieve-Deposited MWNTs for the Detection of SF₆ Decomposition Gases under Partial Discharge.

    PubMed

    Zhang, Xiaoxing; Li, Xin; Luo, Chenchen; Dong, Xingchen; Zhou, Lei

    2015-11-11

    Sulfur hexafluoride (SF6) is widely utilized in gas-insulated switchgear (GIS). However, part of SF6 decomposes into different components under partial discharge (PD) conditions. Previous research has shown that the gas responses of intrinsic and 4 Å-type molecular sieve-deposited multi-wall carbon nanotubes (MWNTs) to SOF2 and SO2F2, two important decomposition components of SF6, are not obvious. In this study, a K-type molecular sieve-deposited MWNTs sensor was developed. Its gas response characteristics and the influence of the mixture ratios of gases on the gas-sensing properties were studied. The results showed that, for sensors with gas mixture ratios of 5:1, 10:1, and 20:1, the resistance change rate increased by nearly 13.0% after SOF2 adsorption, almost 10 times that of MWNTs sensors, while the sensors' resistance change rate with a mixture ratio of 10:1 reached 17.3% after SO2F2 adsorption, nearly nine times that of intrinsic MWNT sensors. Besides, a good linear relationship was observed between concentration of decomposition components and the resistance change rate of sensors.

  7. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which are enough to characterize the whole set of parameters. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions where the cloud's dispersion is maximal: the lines that pass through the cloud's center of gravity and have a maximal density of points around them, obtained by defining an appropriate criterion function and minimizing it. This method can be successfully used to simplify the statistical analysis of questionnaires, because it helps us to select from a set of items only the most relevant ones, which cover the variations of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, we identified 7 principal components, or main items, a fact that simplifies significantly the further statistical analysis of the data.

  8. On Using the Average Intercorrelation Among Predictor Variables and Eigenvector Orientation to Choose a Regression Solution.

    ERIC Educational Resources Information Center

    Mugrage, Beverly; And Others

    Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, out-performed ordinary least squares regression and the principal components solution on the criteria of stability of coefficients and closeness…
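
    The comparison is easy to reproduce in outline with scikit-learn (an assumption of this sketch; the report itself predates such tooling). Note that principal components regression with all components retained reproduces the OLS fit, so any advantage must come from shrinkage or component deletion:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    z = rng.normal(size=(100, 1))
    X = z + 0.05 * rng.normal(size=(100, 5))     # highly intercorrelated predictors
    y = X @ np.array([1.0, 0.5, -0.5, 0.2, 0.1]) + rng.normal(0, 0.1, 100)

    ols = LinearRegression().fit(X, y)
    ridge = Ridge(alpha=1.0).fit(X, y)           # shrinks, stabilizing coefficients
    pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, y)

    print(ols.coef_.round(2))                    # unstable under collinearity
    print(ridge.coef_.round(2))                  # shrunken, more stable
    ```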

  9. A Note on McDonald's Generalization of Principal Components Analysis

    ERIC Educational Resources Information Center

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  10. CLUSFAVOR 5.0: hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles

    PubMed Central

    Peterson, Leif E

    2002-01-01

    CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816

  11. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative ES-IR analysis model is established by adopting a two-step approach. An SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  12. Kinetics of the cellular decomposition of supersaturated solid solutions

    NASA Astrophysics Data System (ADS)

    Ivanov, M. A.; Naumuk, A. Yu.

    2014-09-01

    A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions with the development of a spatially periodic lamellar (platelike) structure, which consists of alternating lamellae of a precipitate phase based on the impurity component and of the depleted initial solid solution. One of the equations, which determines the relationship between the parameters that describe the decomposition process, has been obtained by comparing two approaches to determining the rate of change of the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions have been derived that describe the decomposition rate, the interlamellar distance, and the impurity concentration in the phase that remains after the decomposition. This concentration proves to be equal to half the sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.

  13. Ultraviolet Curable Resin System for Rapid Runway Repair.

    DTIC Science & Technology

    1983-04-01

    Diaryliodonium salt decomposition upon UV exposure: triarylsulfonium salts react in the same manner as diaryliodonium salts upon UV exposure. ... Diphenyliodonium hexafluoroarsenate, the principal diaryliodonium salt used in this study, is approximately $2,329 per pound based solely upon the cost of ... experiments of this program. This choice was predominantly based on the fact that a study has been made of the effect of sensitizers in enhancing the UV

  14. The Complexity of Human Walking: A Knee Osteoarthritis Study

    PubMed Central

    Kotti, Margarita; Duffell, Lynsey D.; Faisal, Aldo A.; McGregor, Alison H.

    2014-01-01

    This study proposes a framework for deconstructing complex walking patterns to create a simple principal component space before checking whether the projection to this space is suitable for identifying changes from normality. We focus on knee osteoarthritis, the most common knee joint disease and the second leading cause of disability. Knee osteoarthritis affects over 250 million people worldwide. The motivation for projecting the high-dimensional movements to a lower dimensional and simpler space is our belief that motor behaviour can be understood by identifying a simplicity via projection to a low principal component space, which may reflect upon the underlying mechanism. To study this, we recruited 180 subjects, 47 of whom reported that they had knee osteoarthritis. They were asked to walk several times along a walkway equipped with two force plates that capture their ground reaction forces along 3 axes, namely vertical, anterior-posterior, and medio-lateral, at 1000 Hz. Data from walks in which the subject did not clearly strike the force plate were excluded, leaving 1–3 gait cycles per subject. To examine the complexity of human walking, we applied dimensionality reduction via Probabilistic Principal Component Analysis. The first principal component explains 34% of the variance in the data, whereas over 80% of the variance is explained by 8 principal components or more. This proves the complexity of the underlying structure of the ground reaction forces. To examine if our musculoskeletal system generates movements that are distinguishable between normal and pathological subjects in a low dimensional principal component space, we applied a Bayes classifier. For the tested cross-validated, subject-independent experimental protocol, the classification accuracy equals 82.62%. Also, a novel complexity measure is proposed, which can be used as an objective index to facilitate clinical decision making. This measure proves that knee osteoarthritis subjects exhibit more variability in the two-dimensional principal component space. PMID:25232949
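
    A rough sketch of this pipeline follows, with simulated ground-reaction-force data; ordinary PCA stands in for Probabilistic PCA, and a Gaussian naive Bayes classifier with subject-grouped cross-validation illustrates the subject-independent protocol. All sizes and labels are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score, GroupKFold

        # Hypothetical ground-reaction-force matrix: one row per gait cycle,
        # concatenating the 3 force axes resampled to 100 points each.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(240, 300))
        y = rng.integers(0, 2, size=240)          # 0 = control, 1 = knee OA
        subject = np.repeat(np.arange(60), 4)     # 4 cycles per subject

        # Project onto a low-dimensional principal component space
        # (fit on all data here for brevity; a stricter protocol would
        # refit the projection inside each fold).
        scores = PCA(n_components=8).fit_transform(X)

        # Subject-independent evaluation: folds never split one subject's cycles.
        acc = cross_val_score(GaussianNB(), scores, y,
                              cv=GroupKFold(n_splits=5), groups=subject)
        print(f"subject-independent accuracy: {acc.mean():.1%}")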

  15. Evolution of various fractions during the windrow composting of chicken manure with rice chaff.

    PubMed

    Kong, Zhijian; Wang, Xuanqing; Liu, Qiumei; Li, Tuo; Chen, Xing; Chai, Lifang; Liu, Dongyang; Shen, Qirong

    2018-02-01

    Different fractions during the 85-day windrow composting were characterized based on various parameters, such as physiochemical properties and hydrolytic enzyme activities; several technologies were used, including spectral scanning techniques, confocal laser scanning microscopy (CLSM) and 13C nuclear magnetic resonance spectroscopy (13C NMR). The evaluated parameters fluctuated strongly during the first 3 weeks, which was the most active period of the composting process. The principal components analysis (PCA) results showed that four classes of the samples were clearly distinguishable, in which the physiochemical parameters were similar, and that the dynamics of the composting process was significantly influenced by C/N and moisture content. The 13C NMR results indicated that O-alkyl-C was the predominant group both in the solid and water-soluble fractions (WSF), and the decomposition of O-alkyl-C mainly occurred during the active stage. In general, the various parameters indicated that windrow composting is a feasible treatment that can be used for the resource reuse of agricultural wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Biologically-inspired data decorrelation for hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.

    2011-12-01

    Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and, in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.

  17. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2015-06-01

    The J-PET scanner, which allows for single bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling in the voltage domain of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed form analytical solution that does not require iterative processing. Moreover, from the Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculating the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.
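
    The closed-form recovery step can be illustrated compactly: learn a PCA basis from training signals, then solve a Tikhonov-regularized least-squares problem from a few samples. Everything below (signal model, sample positions, regularization weight) is hypothetical.

        import numpy as np

        # Hypothetical training set of detector waveforms: smooth pulses
        # with slightly varying widths, sampled at 64 points (simulated).
        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 64)
        train = np.array([np.exp(-((t - 0.3) / (0.05 + 0.02 * rng.random())) ** 2)
                          for _ in range(500)])

        # PCA basis of the training signals (compaction property).
        mean = train.mean(axis=0)
        U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
        B = Vt[:5].T                      # 64 x 5 basis of leading components

        # Only a few voltage-domain samples of a new signal are acquired.
        idx = np.array([5, 15, 25, 40, 55])
        x_true = np.exp(-((t - 0.3) / 0.06) ** 2)
        y = x_true[idx] + 0.01 * rng.normal(size=idx.size)

        # Tikhonov-regularized closed-form solution in the PCA coordinates:
        # c = (A^T A + lam I)^{-1} A^T (y - mean[idx]), no iterations needed.
        A = B[idx]
        lam = 1e-3
        c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ (y - mean[idx]))
        x_hat = mean + B @ c
        print("recovery RMSE:", float(np.sqrt(np.mean((x_hat - x_true) ** 2))))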

  18. Reconstructing Past Admixture Processes from Local Genomic Ancestry Using Wavelet Transformation

    PubMed Central

    Sanderson, Jean; Sudoyo, Herawati; Karafet, Tatiana M.; Hammer, Michael F.; Cox, Murray P.

    2015-01-01

    Admixture between long-separated populations is a defining feature of the genomes of many species. The mosaic block structure of admixed genomes can provide information about past contact events, including the time and extent of admixture. Here, we describe an improved wavelet-based technique that better characterizes ancestry block structure from observed genomic patterns. Principal components analysis is first applied to genomic data to identify the primary population structure, followed by wavelet decomposition to develop a new characterization of local ancestry information along the chromosomes. For testing purposes, this method is applied to human genome-wide genotype data from Indonesia, as well as virtual genetic data generated using genome-scale sequential coalescent simulations under a wide range of admixture scenarios. Time of admixture is inferred using an approximate Bayesian computation framework, providing robust estimates of both admixture times and their associated levels of uncertainty. Crucially, we demonstrate that this revised wavelet approach, which we have released as the R package adwave, provides improved statistical power over existing wavelet-based techniques and can be used to address a broad range of admixture questions. PMID:25852078

  19. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    Table of contents fragments: Introduction; Eigenstructure; Components; Ordering of State Variables; Example - 8th Order Power System Model. In Chapter 3 the time scale decomposition of singularly perturbed systems is considered; for this problem, (1.1) takes a singularly perturbed form [equation garbled in source].

  20. MAUD: An Interactive Computer Program for the Structuring, Decomposition, and Recomposition of Preferences between Multiattributed Alternatives. Final Report. Technical Report 543.

    ERIC Educational Resources Information Center

    Humphreys, Patrick; Wisudha, Ayleen

    As a demonstration of the application of heuristic devices to decision-theoretical techniques, an interactive computer program known as MAUD (Multiattribute Utility Decomposition) has been designed to support decision or choice problems that can be decomposed into component factors, or to act as a tool for investigating the microstructure of a…

  1. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting

    2013-01-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet performance requirements.
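
    A toy version of the calibration idea, projecting the zeroth principal noise component out of a frame, is sketched below with simulated data; the drift pattern and frame sizes are hypothetical.

        import numpy as np

        # Hypothetical stack of reference frames: 100 frames of 32x32 pixels,
        # contaminated by a common drift pattern (a simulated stand-in for
        # the thermal/electrical instabilities modulated in the study).
        rng = np.random.default_rng(5)
        drift = np.outer(np.sin(np.linspace(0, 3, 100)), rng.normal(size=32 * 32))
        frames = drift + 0.1 * rng.normal(size=(100, 32 * 32))

        # Principal noise components of the frame stack.
        U, s, Vt = np.linalg.svd(frames - frames.mean(axis=0), full_matrices=False)
        pc0 = Vt[0]                              # zeroth principal noise component

        # Calibrate a science frame: project out the pc0 pattern.
        science = rng.normal(size=32 * 32) + 2.0 * pc0
        amplitude = science @ pc0                # pc0 is unit-norm
        cleaned = science - amplitude * pc0
        print("correlated-noise amplitude removed:", round(float(amplitude), 2))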

  2. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    PubMed

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of the fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of the fermented food products like cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified the six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of principal components using multiple least squares regression (R² = 0.8). The result from PCA was statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.

  3. Determination of the thermal stability of perfluoropolyalkyl ethers by tensimetry

    NASA Technical Reports Server (NTRS)

    Helmick, Larry A.; Jones, William R., Jr.

    1992-01-01

    The thermal decomposition temperatures of several perfluoropolyalkyl ether fluids were determined with a computerized tensimeter. In general, the decomposition temperatures of the commercial fluids were all similar and significantly higher than those for noncommercial fluids. Correlation of the decomposition temperatures with the molecular structures of the primary components of the commercial fluids revealed that the stability of the fluids was not affected by carbon chain length, branching, or adjacent difluoroformal groups. Instead, stability was limited by the presence of small quantities of thermally unstable material and/or chlorine-containing material arising from the use of chlorine-containing solvents during synthesis. Finally, correlation of decomposition temperatures with molecular weights for two fluids supports a chain cleavage reaction mechanism for one and an unzipping reaction mechanism for the other.

  4. The 'overflow tap' theory: linking GPP to forest soil carbon dynamics and the mycorrhizal component

    NASA Astrophysics Data System (ADS)

    Heinemeyer, Andreas; Willkinson, Matthew; Subke, Jens-Arne; Casella, Eric; Vargas, Rodrigo; Morison, James; Ineson, Phil

    2010-05-01

    Quantifying soil organic carbon (SOC) dynamics accurately is crucial to underpin better predictions of future climate change feedbacks within the atmosphere-vegetation-soil system. Measuring the components of ecosystem carbon fluxes has become a central point of the research focus during the last decade, not least because of the large SOC stocks, potentially vulnerable to climate change. However, our basic understanding of the composition and environmental responses of the soil CO2 efflux is still under debate and limited by the available field methodologies. For example, only recently did we successfully separate root (R), mycorrhizal fungal (F) and soil animal/microbial (H) respiration based on a mesh-bag/collar methodology and describe their unique environmental responses. Yet it might be these differences which are crucial for understanding C-cycle feedbacks and observed limitations in plant biomass increase under elevated carbon dioxide (e.g. FACE) studies. It is becoming clear that these flux components and their environmental responses must be incorporated in models that link but also treat the heterotrophic and autotrophic fluxes separately. However, owing to a scarcity of experimental data, separation of fluxes and environmental drivers has been ignored in current models. We are now in a position to parameterize realistic soil C turnover models that include both decomposition and plant-derived fluxes. Such models will allow (1) a direct comparison of model output to field data for all flux components, (2) include the potential to link plant C allocation to the rhizosphere with increased decomposition activity through soil C priming, and (3) to explore the potential of plant biomass C sequestration limitations under increased C assimilation. These mechanisms are fundamental in describing the stability of future SOC stocks due to elevated temperatures and carbon dioxide, altering SOC decomposition directly and indirectly through changes in plant productivity. The work presented here focuses on three critical areas: (1) We present annual fluxes at hourly intervals for the three soil CO2 efflux components (R, F and H) from a 75 year-old deciduous oak forest in SE England. We investigate the individual environmental responses of the three flux components, and compare them to soil decomposition modelled by CENTURY and its latest version (i.e. DAYCENT), which separately models root-derived respiration in addition to the soil decomposition output. (2) Using estimates of gross primary productivity (GPP) based on eddy covariance measurements from the same site, we explore linkages between GPP and soil respiration component fluxes using basic regression and wavelet analyses. We show a distinctly different time lag signal between GPP and root vs. mycorrhizal fungal respiration. We then discuss how models might need to be improved to accurately predict total soil CO2 efflux, including root-derived respiration. (3) We finally discuss the 'overflow tap' theory, that during periods of high assimilation (e.g. optimum environmental conditions or elevated CO2) surplus non-structural C is allocated belowground to the mycorrhizal network; this additional C could then be used and released by the associated fungal partners, causing soil priming through stimulating decomposition.

  5. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. A confidence ellipse was then applied to the principal components of each sample and used as the classification criterion. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
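
    The classification criterion can be sketched as a Mahalanobis-distance test against a chi-square cutoff, which is the usual way a confidence ellipse is expressed in a two-dimensional principal component space; the class scores below are simulated.

        import numpy as np
        from scipy.stats import chi2

        # Hypothetical PC1/PC2 scores for two color classes (simulated).
        rng = np.random.default_rng(6)
        class_a = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=80)
        class_b = rng.multivariate_normal([4, 3], [[0.8, -0.2], [-0.2, 0.6]], size=80)

        def inside_confidence_ellipse(x, samples, level=0.95):
            """True if x falls inside the class's confidence ellipse, i.e. its
            Mahalanobis distance is below the chi-square cutoff for 2 dims."""
            mu = samples.mean(axis=0)
            cov = np.cov(samples.T)
            d2 = (x - mu) @ np.linalg.inv(cov) @ (x - mu)
            return bool(d2 <= chi2.ppf(level, df=2))

        query = np.array([0.5, -0.2])
        print("in class A ellipse:", inside_confidence_ellipse(query, class_a))
        print("in class B ellipse:", inside_confidence_ellipse(query, class_b))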

  6. Pepper seed variety identification based on visible/near-infrared spectral technology

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen

    2016-11-01

    Pepper is a kind of important fruit vegetable, and with the expansion of hybrid pepper planting areas, detection of pepper seed purity is especially important. This research used visible/near infrared (VIS/NIR) spectral technology to detect the variety of single pepper seeds, choosing the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4" and "Zhuo Jiao NO.5" as research samples. VIS/NIR spectral data of 80 "Zhuo Jiao NO.3", 80 "Zhuo Jiao NO.4" and 80 "Zhuo Jiao NO.5" pepper seeds were collected, and the original spectral data were pretreated with standard normal variable (SNV) transform, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. The principal component analysis (PCA) method was adopted to reduce the dimension of the spectral data and extract principal components. According to the distribution of the first principal component (PC1) along with the second principal component (PC2) in the two-dimensional plane, and similarly the distribution of PC1 coupled with the third principal component (PC3), and the distribution of PC2 combined with PC3, distribution areas of the three varieties of pepper seeds were divided in each two-dimensional plane, and the discriminant accuracy of PCA was tested by observing the distribution area of the samples' principal components in the validation set. This study combined PCA and linear discriminant analysis (LDA) to identify single pepper seed varieties. Results showed that with the FD preprocessing method, the discriminant accuracy of pepper seed varieties was 98% for the validation set. It was concluded that using VIS/NIR spectral technology is feasible for identification of single pepper seed varieties.
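
    A condensed sketch of the preprocessing-plus-classification chain follows (simulated spectra; only the first-derivative pretreatment is shown, via a Savitzky-Golay filter, and scikit-learn's PCA and LDA stand in for the study's implementation).

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        # Hypothetical VIS/NIR spectra: 240 seeds x 256 wavelengths, 3 varieties.
        rng = np.random.default_rng(7)
        y = np.repeat([0, 1, 2], 80)                      # three varieties
        X = rng.normal(size=(240, 256)).cumsum(axis=1)    # smooth-ish baselines
        X += y[:, None] * np.linspace(0, 5, 256)          # variety-dependent slope

        # First-derivative (FD) pretreatment via a Savitzky-Golay filter,
        # one of the preprocessing options used in the study.
        X_fd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

        # PCA for dimension reduction followed by LDA for variety discrimination.
        X_tr, X_te, y_tr, y_te = train_test_split(X_fd, y, stratify=y, random_state=0)
        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        model.fit(X_tr, y_tr)
        print(f"validation accuracy: {model.score(X_te, y_te):.1%}")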

  7. Analysis of environmental variation in a Great Plains reservoir using principal components analysis and geographic information systems

    USGS Publications Warehouse

    Long, J.M.; Fisher, W.L.

    2006-01-01

    We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.

  8. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or maliciously tampered with. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyzes the perceptual features of speech, applying wavelet packet decomposition to each speech component. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating hash values through quantification of the feature matrix using its mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms and is able to resist the attack of common background noise. Also, the algorithm is computationally efficient and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
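
    The mid-value quantification step reduces to binarizing the feature matrix against its median. The sketch below shows only that step and a bit-error-rate comparison; the wavelet packet and tensor stages are omitted, and the feature matrix is simulated.

        import numpy as np

        def perceptual_hash(feature_matrix: np.ndarray) -> np.ndarray:
            """Binarize a speech feature matrix against its mid-value (median),
            mirroring the quantification step described in the abstract."""
            mid = np.median(feature_matrix)
            return (feature_matrix > mid).astype(np.uint8).ravel()

        def bit_error_rate(h1: np.ndarray, h2: np.ndarray) -> float:
            """Fraction of differing hash bits; low BER means matching content."""
            return float(np.mean(h1 != h2))

        # Hypothetical feature matrix (e.g., unfolded LPCC/LSP/ISP tensor slices).
        rng = np.random.default_rng(8)
        features = rng.normal(size=(12, 40))
        noisy = features + 0.05 * rng.normal(size=features.shape)  # channel noise
        tampered = features.copy()
        tampered[:, 10:20] = rng.normal(size=(12, 10))             # malicious edit

        h = perceptual_hash(features)
        print("BER vs noisy copy:   ", bit_error_rate(h, perceptual_hash(noisy)))
        print("BER vs tampered copy:", bit_error_rate(h, perceptual_hash(tampered)))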

  9. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition

    PubMed Central

    Norman-Haignere, Sam

    2015-01-01

    The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. PMID:26687225

  10. SDE decomposition and A-type stochastic interpretation in nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Yuan, Ruoshi; Tang, Ying; Ao, Ping

    2017-12-01

    An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.
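
    For orientation, the decomposed form is usually written as follows in this framework (the notation below is assumed for illustration, not quoted from the paper):

        % Schematic of the SDE decomposition (notation assumed for illustration).
        % Original stochastic dynamics:
        %   \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \boldsymbol{\xi}(\mathbf{x}, t)
        % Decomposed form, with S symmetric (dissipative), A antisymmetric
        % (detailed-balance breaking), and a potential function \phi:
        [S(\mathbf{x}) + A(\mathbf{x})]\, \dot{\mathbf{x}}
            = -\nabla \phi(\mathbf{x}) + \boldsymbol{\xi}(\mathbf{x}, t),
        \qquad
        \langle \boldsymbol{\xi}(\mathbf{x}, t)\, \boldsymbol{\xi}^{\top}(\mathbf{x}, t') \rangle
            = 2\, S(\mathbf{x})\, \delta(t - t')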

  11. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.

  12. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular-value-decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight bit gray scale image by inserting an invisible eight bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of watermark is ensured by visual inspection and high value of PSNR of watermarked images. Presence of watermark is ensured by visual inspection and high values of NCC and MSSIM of extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
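
    A compact sketch of the embedding/extraction idea follows; a Gaussian low-pass in the log domain stands in for the homomorphic illumination estimate, and the original singular values serve as the semi-blind key. All sizes and the embedding strength are hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(9)
        host = rng.uniform(0, 255, size=(64, 64))   # hypothetical gray-scale host
        wm = rng.uniform(0, 255, size=(64, 64))     # hypothetical watermark

        # Homomorphic split: log-domain low-pass ~ illumination,
        # residual ~ reflectance.
        log_img = np.log1p(host)
        illumination = gaussian_filter(log_img, sigma=8)
        reflectance = log_img - illumination

        # Embed the watermark's singular values into those of the reflectance.
        U, s, Vt = np.linalg.svd(reflectance)
        Uw, sw, Vtw = np.linalg.svd(wm)
        alpha = 0.01                                 # embedding strength
        marked = np.expm1(illumination + (U * (s + alpha * sw)) @ Vt)

        # Semi-blind extraction: the original singular values s (and the
        # illumination estimate) are available as the key.
        _, s_rx, _ = np.linalg.svd(np.log1p(marked) - illumination)
        sw_rec = (s_rx - s) / alpha
        print("singular-value recovery error:", float(np.max(np.abs(sw_rec - sw))))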

  13. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.

  14. Factors associated with successful transition among children with disabilities in eight European countries

    PubMed Central

    2017-01-01

    Introduction This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Methods Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child’s transition, child involvement in transition, child autonomy, school ethos, professionals’ involvement in transition and integrated working, such as, joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert-scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), ‘child inclusive ethos,’ contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as other factors that may have influenced transition. All four principal components were significantly associated with a successful transition, with PC1 having the most effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning. PMID:28636649

  15. Factors associated with successful transition among children with disabilities in eight European countries.

    PubMed

    Ravenscroft, John; Wazny, Kerri; Davis, John M

    2017-01-01

    This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition and integrated working, such as, joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert-scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as other factors that may have influenced transition. All four principal components were significantly associated with a successful transition, with PC1 having the most effect (OR: 4.04, CI: 2.43-7.18, p<0.0001). To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning.
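
    The PCA-then-logistic-regression step can be sketched briefly; the item matrix and outcome below are simulated (the real questionnaire had 41 questions, not all Likert-scaled), and exponentiated coefficients are read as odds ratios as in the abstract.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        # Hypothetical Likert-scale survey matrix: 306 parents x 30 items.
        rng = np.random.default_rng(10)
        X = rng.integers(1, 6, size=(306, 30)).astype(float)
        success = rng.integers(0, 2, size=306)        # 1 = successful transition

        # PCA on the standardized items, keeping the leading components.
        scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))

        # Logistic regression of transition success on the component scores;
        # exponentiated coefficients are odds ratios.
        model = LogisticRegression().fit(scores, success)
        for k, beta in enumerate(model.coef_[0], start=1):
            print(f"PC{k}: OR = {np.exp(beta):.2f}")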

  16. Patient phenotypes associated with outcomes after aneurysmal subarachnoid hemorrhage: a principal component analysis.

    PubMed

    Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch

    2014-03-01

    Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.

  17. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.

  18. Age-associated patterns in gray matter volume, cerebral perfusion and BOLD oscillations in children and adolescents.

    PubMed

    Bray, Signe

    2017-05-01

    Healthy brain development involves changes in brain structure and function that are believed to support cognitive maturation. However, understanding how structural changes such as grey matter thinning relate to functional changes is challenging. To gain insight into structure-function relationships in development, the present study took a data driven approach to define age-related patterns of variation in gray matter volume (GMV), cerebral blood flow (CBF) and blood-oxygen level dependent (BOLD) signal variation (fractional amplitude of low-frequency fluctuations; fALFF) in 59 healthy children aged 7-18 years, and examined relationships between modalities. Principal components analysis (PCA) was applied to each modality in parallel, and participant scores for the top components were assessed for age associations. We found that decompositions of CBF, GMV and fALFF all included components for which scores were significantly associated with age. The dominant patterns in GMV and CBF showed significant (GMV) or trend level (CBF) associations with age and a strong spatial overlap, driven by increased signal intensity in default mode network (DMN) regions. GMV, CBF and fALFF additionally showed components accounting for 3-5% of variability with significant age associations. However, these patterns were relatively spatially independent, with small-to-moderate overlap between modalities. Independence of age effects was further demonstrated by correlating individual subject maps between modalities: CBF was significantly less correlated with GMV and fALFF in older children relative to younger. These spatially independent effects of age suggest that the parallel decline observed in global GMV and CBF may not reflect spatially synchronized processes. Hum Brain Mapp 38:2398-2407, 2017. © 2017 Wiley Periodicals, Inc.

  19. Extracting the potential-well of a near-field optical trap using the Helmholtz-Hodge decomposition

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Padhy, Punnag; Hansen, Paul C.; Hesselink, Lambertus

    2018-02-01

    The non-conservative nature of the force field generated by a near-field optical trap is analyzed. A plasmonic C-shaped engraving on a gold film is considered as the trap. The force field is calculated using the Maxwell stress tensor method. The Helmholtz-Hodge decomposition is used to extract the conservative and the non-conservative component of the force. Due to the non-negligible non-conservative component, it is found that the conventional approach of extracting the potential by direct integration of the force is not accurate. Despite the non-conservative nature of the force field, it is found that the statistical properties of a trapped nanoparticle can be estimated from the conservative component of the force field alone. Experimental and numerical results are presented to support the claims.
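
    For a periodic field, the Helmholtz-Hodge split has a simple Fourier-space form: project the transformed field onto the wavevector to obtain the curl-free (conservative) part. The sketch below applies this to a toy trap-like field; real, non-periodic force maps would need padding or windowing first, and the field here is hypothetical.

        import numpy as np

        def helmholtz_hodge_2d(fx, fy):
            """Split a (periodic) 2-D vector field into a curl-free component
            and a divergence-free component via the projection k (k.F)/|k|^2."""
            ny, nx = fx.shape
            kx = np.fft.fftfreq(nx).reshape(1, nx)
            ky = np.fft.fftfreq(ny).reshape(ny, 1)
            k2 = kx ** 2 + ky ** 2
            k2[0, 0] = 1.0                          # avoid dividing by zero at k = 0
            fxh, fyh = np.fft.fft2(fx), np.fft.fft2(fy)
            proj = (kx * fxh + ky * fyh) / k2       # (k . F_hat) / |k|^2
            cfx = np.fft.ifft2(kx * proj).real      # curl-free (conservative) part
            cfy = np.fft.ifft2(ky * proj).real
            return (cfx, cfy), (fx - cfx, fy - cfy) # remainder is divergence-free

        # Hypothetical trap-like force field: a conservative well plus a rotation.
        y, x = np.mgrid[-1:1:64j, -1:1:64j]
        fx, fy = -x - 0.3 * y, -y + 0.3 * x
        (cfx, cfy), (dfx, dfy) = helmholtz_hodge_2d(fx, fy)

        # The two parts sum back to the original field exactly by construction;
        # the curl of the conservative part should be near zero in the interior.
        curl = np.gradient(cfy, axis=1) - np.gradient(cfx, axis=0)
        print("reconstruction exact:", np.allclose(cfx + dfx, fx))
        print("mean |curl| of conservative part:", float(np.abs(curl).mean()))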

  20. Applications of a Novel Clustering Approach Using Non-Negative Matrix Factorization to Environmental Research in Public Health

    PubMed Central

    Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S. Stanley

    2016-01-01

    Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability. PMID:27213413

  1. Applications of a Novel Clustering Approach Using Non-Negative Matrix Factorization to Environmental Research in Public Health.

    PubMed

    Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S Stanley

    2016-05-18

    Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability.
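
    The PosNegNMF construction itself is a few lines on top of a standard NMF solver; the sketch below uses scikit-learn's NMF on a simulated mixed-sign matrix (component count and data are hypothetical).

        import numpy as np
        from sklearn.decomposition import NMF

        # Hypothetical centered data matrix (mixed signs after removing row means).
        rng = np.random.default_rng(11)
        X = rng.normal(size=(40, 12))
        X = X - X.mean(axis=1, keepdims=True)

        # PosNegNMF idea: concatenate the positive part and the absolute value
        # of the negative part, so standard NMF applies to a mixed-sign matrix.
        X_pos = np.clip(X, 0, None)
        X_neg = np.clip(-X, 0, None)
        X_cat = np.hstack([X_pos, X_neg])          # 40 x 24, non-negative

        model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X_cat)             # row (observation) loadings
        H = model.components_                      # column loadings, in two halves

        # Cluster each observation by its dominant NMF component.
        labels = W.argmax(axis=1)
        print("cluster sizes:", np.bincount(labels, minlength=3))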

  2. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images

    PubMed Central

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, partial class label, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score which enables discrimination between prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision recall curve (AUPRC) of 0.53 ± 0.03; in comparison, CBIR with Principal Component Analysis (PCA) to learn a low dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985

  3. Use of seasonal trend decomposition to understand groundwater behaviour in the Permo-Triassic Sandstone aquifer, Eden Valley, UK

    NASA Astrophysics Data System (ADS)

    Lafare, Antoine E. A.; Peach, Denis W.; Hughes, Andrew G.

    2016-02-01

    The daily groundwater level (GWL) response in the Permo-Triassic Sandstone aquifers in the Eden Valley, England (UK), has been studied using the seasonal trend decomposition by LOESS (STL) technique. The hydrographs from 18 boreholes in the Permo-Triassic Sandstone were decomposed into three components: seasonality, general trend and remainder. The decomposition was analysed first visually, then using tools involving a variance ratio, time-series hierarchical clustering and correlation analysis. Differences and similarities in decomposition pattern were explained using the physical and hydrogeological information associated with each borehole. The Penrith Sandstone exhibits vertical and horizontal heterogeneity, whereas the more homogeneous St Bees Sandstone groundwater hydrographs characterize a well-identified seasonality; however, exceptions can be identified. A stronger trend component is obtained in the silicified parts of the northern Penrith Sandstone, while the southern Penrith, containing Brockram (breccias) Formation, shows a greater relative variability of the seasonal component. Other boreholes drilled as shallow/deep pairs show differences in responses, revealing the potential vertical heterogeneities within the Penrith Sandstone. The differences in bedrock characteristics between and within the Penrith and St Bees Sandstone formations appear to influence the GWL response. The de-seasonalized and de-trended GWL time series were then used to characterize the response, for example in terms of memory effect (autocorrelation analysis). By applying the STL method, it is possible to analyse GWL hydrographs leading to better conceptual understanding of the groundwater flow. Thus, variation in groundwater response can be used to gain insight into the aquifer physical properties and understand differences in groundwater behaviour.
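
    The decomposition step is directly reproducible with the STL implementation in statsmodels; the sketch below applies it to a simulated daily groundwater-level series and computes a seasonal variance ratio of the kind used to compare boreholes.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL

        # Hypothetical daily groundwater levels: annual cycle + slow trend + noise.
        rng = np.random.default_rng(12)
        idx = pd.date_range("2000-01-01", periods=6 * 365, freq="D")
        t = np.arange(idx.size)
        gwl = (0.0005 * t                          # slow rising trend
               + np.sin(2 * np.pi * t / 365.25)    # seasonal recharge cycle
               + 0.2 * rng.normal(size=t.size))    # remainder
        series = pd.Series(gwl, index=idx)

        # LOESS-based decomposition into seasonal, trend and remainder components.
        res = STL(series, period=365, robust=True).fit()

        # Variance ratio of the seasonal component, one of the comparison tools
        # mentioned in the abstract.
        ratio = res.seasonal.var() / series.var()
        print(f"seasonal variance ratio: {ratio:.2f}")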

  4. Introduction to uses and interpretation of principal component analyses in forest biology.

    Treesearch

    J. G. Isebrands; Thomas R. Crow

    1975-01-01

    The application of principal component analysis for interpretation of multivariate data sets is reviewed with emphasis on (1) reduction of the number of variables, (2) ordination of variables, and (3) applications in conjunction with multiple regression.

  5. Principal component analysis of phenolic acid spectra

    USDA-ARS?s Scientific Manuscript database

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  6. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied on the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from its nearby fluorescence noise of liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
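
    The core of the strategy, PCA across the frames of a dynamic sequence so that each principal component yields a spatial map, can be sketched as follows with a simulated two-region phantom (region positions and kinetics are hypothetical).

        import numpy as np

        # Hypothetical dynamic fluorescence stack: 60 frames of 32x32 pixels,
        # with a fast-clearing "liver-like" region and a delayed-uptake
        # "tumor-like" region (both simulated).
        rng = np.random.default_rng(13)
        t = np.linspace(0, 1, 60)
        frames = 0.02 * rng.normal(size=(60, 32, 32))
        frames[:, 5:12, 5:12] += np.exp(-4 * t)[:, None, None]
        frames[:, 20:27, 20:27] += (4 * t * np.exp(-2 * t))[:, None, None]

        # PCA across time: rows are frames, columns are pixels.
        X = frames.reshape(60, -1)
        X = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)

        # PC maps: each principal component reshaped back to image space.
        # The leading maps separate the two kinetic patterns; in the study,
        # the second PC map aligned with the tumor location.
        for k in range(2):
            m = np.abs(Vt[k]).reshape(32, 32)
            liver, tumor = m[5:12, 5:12].mean(), m[20:27, 20:27].mean()
            label = "liver-like" if liver > tumor else "tumor-like"
            print(f"PC{k + 1} map is dominated by the {label} region")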

  7. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    NASA Astrophysics Data System (ADS)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks sampled from 17 different volcanoes in the cluster. The PCA results demonstrated that the first three principal components account for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These account for 59%, 20%, and 6%, respectively, of the variance over the entire compositional range, indicating that magma mixing accounts for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.

  8. Relation between SM-covers and SM-decompositions of Petri nets

    NASA Astrophysics Data System (ADS)

    Karatkevich, Andrei; Wiśniewski, Remigiusz

    2015-12-01

    The task of finding, for a given Petri net, a set of sequential components that together represent the behavior of the net arises often in the formal analysis of Petri nets and in applications of Petri nets to logical control. The task comes in two variants: obtaining a Petri net cover or a decomposition. A Petri net cover supposes that a set of subnets of the given net is selected, whereas the sequential nets forming a decomposition may have additional places that do not belong to the decomposed net. The paper discusses the differences and relations between these two tasks and their results.

  9. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Introduction: Empirical mode decomposition (EMD) is a noise-suppression algorithm based on wave-field separation, which exploits the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff-dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to decompose seismic data adaptively and obtain a series of intrinsic mode functions (IMFs) at different scales. Based on the difference in Hausdorff dimension between effective signal and random noise, we identify the IMF components mixed with random noise. We then use threshold correlation filtering to separate the valid signal from the random noise effectively. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets, whose spectra are then analysed. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first is the effective-wave content at the larger scales; the second is the noise at the smaller scales; the third comprises the IMF components containing both signal and random noise. The third kind of IMF component is then processed with the Hausdorff-dimension algorithm, choosing an appropriate time-window size, initial step and increment to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal-dimension value is less than or equal to 1.05 for each IMF component, thereby separating the residual noise. Reconstructing the signal from the dimension-filtered IMF components together with the effective-wave IMF components retained in the first selection yields the de-noised result.
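
    A compact sketch of the IMF-filtering idea. PyEMD supplies the EMD step; the Higuchi fractal dimension is used here as a computable stand-in for the paper's Hausdorff dimension (with this estimator, noise-like IMFs score near 2 rather than below 1.05, so the keep/reject rule is inverted relative to the threshold quoted above), and a single per-IMF dimension replaces the paper's sliding-window instantaneous dimension:

        import numpy as np
        from PyEMD import EMD  # pip package "EMD-signal"

        def higuchi_fd(x, kmax=8):
            # Higuchi estimator: curve length L(k) ~ k**(-D), so D is the
            # slope of log L(k) against log(1/k).
            n = len(x)
            lk = []
            for k in range(1, kmax + 1):
                lengths = []
                for m in range(k):
                    sub = x[np.arange(m, n, k)]
                    if sub.size < 2:
                        continue
                    lengths.append(np.abs(np.diff(sub)).sum()
                                   * (n - 1) / ((sub.size - 1) * k))
                lk.append(np.mean(lengths))
            ks = np.arange(1, kmax + 1)
            return float(np.polyfit(np.log(1.0 / ks), np.log(lk), 1)[0])

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2000)
        trace = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(t.size)

        imfs = EMD().emd(trace)                          # adaptive decomposition
        dims = np.array([higuchi_fd(imf) for imf in imfs])
        denoised = imfs[dims < 1.5].sum(axis=0)          # keep signal-like IMFs
        print(dims.round(2))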

  10. Office of Naval Research Aggregate Dynamics in the Sea Workshop Held at Pacific Grove, California on September 22-24, 1986

    DTIC Science & Technology

    1986-09-01

    collision, etc.) originate from largely biogenically derived component particles. Local loss terms include sinking, advection and decomposition which...Some quarry or scrape away the aggregate surface, others consume entire particles. Bacterial decomposition on the particle surfaces may also weaken...major role in the degradation of aggregates. Only limited information is available regarding microbial colonization, hydrolysis, and metabolism of the

  11. Galaxy Zoo: secular evolution of barred galaxies from structural decomposition of multiband images

    NASA Astrophysics Data System (ADS)

    Kruk, Sandor J.; Lintott, Chris J.; Bamford, Steven P.; Masters, Karen L.; Simmons, Brooke D.; Häußler, Boris; Cardamone, Carolin N.; Hart, Ross E.; Kelvin, Lee; Schawinski, Kevin; Smethurst, Rebecca J.; Vika, Marina

    2018-02-01

    We present the results of two-component (disc+bar) and three-component (disc+bar+bulge) multiwavelength 2D photometric decompositions of barred galaxies in five Sloan Digital Sky Survey (SDSS) bands (ugriz). This sample of ∼3500 nearby (z < 0.06) galaxies with strong bars selected from the Galaxy Zoo citizen science project is the largest sample of barred galaxies to be studied using photometric decompositions that include a bar component. With detailed structural analysis, we obtain physical quantities such as the bar- and bulge-to-total luminosity ratios, effective radii, Sérsic indices and colours of the individual components. We observe a clear difference in the colours of the components, the discs being bluer than the bars and bulges. An overwhelming fraction of bulge components have Sérsic indices consistent with being pseudo-bulges. By comparing the barred galaxies with a mass-matched and volume-limited sample of unbarred galaxies, we examine the connection between the presence of a large-scale galactic bar and the properties of discs and bulges. We find that the discs of unbarred galaxies are significantly bluer compared to the discs of barred galaxies, while there is no significant difference in the colours of the bulges. We find possible evidence of secular evolution via bars that leads to the build-up of pseudo-bulges and to the quenching of star formation in the discs. We identify a subsample of unbarred galaxies with an inner lens/oval and find that their properties are similar to barred galaxies, consistent with an evolutionary scenario in which bars dissolve into lenses. This scenario deserves further investigation through both theoretical and observational work.

  12. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The ideas of cost decomposition are summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  13. Approximate analytical solutions in the analysis of elastic structures of complex geometry

    NASA Astrophysics Data System (ADS)

    Goloskokov, Dmitriy P.; Matrosov, Alexander V.

    2018-05-01

    A method of analytical decomposition for the analysis of plane structures of complex configuration is presented. For each part of the structure in the form of a rectangle, all components of the stress-strain state are constructed by the superposition method. The method is based on two solutions derived in the form of trigonometric series with unknown coefficients using the method of initial functions. The coefficients are determined from the system of linear algebraic equations obtained by satisfying the boundary conditions and the conditions for joining the structure parts. The components of the stress-strain state of a bent plate with holes are calculated using the analytical decomposition method.

  14. Assessment of Supportive, Conflicted, and Controlling Dimensions of Family Functioning: A Principal Components Analysis of Family Environment Scale Subscales in a College Sample.

    ERIC Educational Resources Information Center

    Kronenberger, William G.; Thompson, Robert J., Jr.; Morrow, Catherine

    1997-01-01

    A principal components analysis of the Family Environment Scale (FES) (R. Moos and B. Moos, 1994) was performed using 113 undergraduates. Research supported 3 broad components encompassing the 10 FES subscales. These results supported previous research and the generalization of the FES to college samples. (SLD)

  15. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly, and this randomness of initialization leads to different ICA decomposition results. A single (one-time) decomposition for fMRI data analysis is therefore not usually reliable. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis also indicated better signal-reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
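
    A sketch of the core idea under stated assumptions: ATGP picks extreme samples by repeated orthogonal-subspace projection, and those samples seed FastICA's w_init so that the decomposition is repeatable. Mapping ATGP targets in the whitened PCA space directly to w_init is a simplification for illustration, not the paper's exact pipeline:

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        def atgp(Z, n_targets):
            # Repeatedly pick the row with the largest residual norm, then
            # project that direction out (orthogonal-subspace projection).
            R = Z.copy()
            picked = []
            for _ in range(n_targets):
                i = int(np.argmax((R ** 2).sum(axis=1)))
                picked.append(i)
                u = R[i] / np.linalg.norm(R[i])
                R = R - np.outer(R @ u, u)
            return np.array(picked)

        rng = np.random.default_rng(0)
        S_true = rng.laplace(size=(500, 3))              # non-Gaussian sources
        X = S_true @ rng.standard_normal((3, 20))        # mixed "fMRI-like" data

        n = 3
        Z = PCA(n_components=n, whiten=True).fit_transform(X)
        w_init = Z[atgp(Z, n)]                           # fixed, data-driven init
        S = FastICA(n_components=n, whiten=False, w_init=w_init,
                    max_iter=1000).fit_transform(Z)      # same init, same result
        print(S.shape)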

  16. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range, well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions at two successive sampling times, showing the mode's tendency to stay close to a minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate, "semiconstrained" modes are explained by asserting that their random walk behavior is not completely free but confined between energy barriers.
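
    A worked sketch of the intra-minimum model: fit a stationary ARMA(2,1) to one principal-component series and read the frequency and damping factor off the AR characteristic roots. A synthetic damped-oscillator process stands in for a PC obtained from an MD trajectory:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        x = np.zeros(4000)                   # stand-in for one PC time series
        for i in range(2, x.size):           # AR(2) with complex roots
            x[i] = 1.8 * x[i - 1] - 0.9 * x[i - 2] + rng.standard_normal()

        fit = ARIMA(x, order=(2, 0, 1)).fit()
        a1, a2 = fit.arparams
        r = np.roots([1.0, -a1, -a2])[0]     # characteristic root of the AR part
        omega = abs(np.angle(r))             # damped frequency, radians per frame
        damping = -np.log(abs(r))            # damping factor per frame
        print(round(omega, 3), round(damping, 4), fit.maparams.round(3))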

  17. Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...

  18. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  19. 10Be in late deglacial climate simulated by ECHAM5-HAM - Part 2: Isolating the solar signal from 10Be deposition

    NASA Astrophysics Data System (ADS)

    Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.

    2014-04-01

    This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 year time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea-surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies at lower frequencies, dominated by the 11 year solar cycle within the 30 year timescale of these experiments. The climatic noise is of higher frequencies than 11 years during the 30 year period studied. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, an effective method for adaptively decomposing a time series into different frequency components. The low-frequency components and the long-term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high-frequency components represent climate-driven noise related to the seasonal cycle of, e.g., precipitation and are closely connected to the high frequencies of precipitation. These results show, firstly, that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced in 10Be deposition data either by applying EOF analysis when a reasonably large number of data sets is available, or by decomposing individual data sets to filter out high-frequency fluctuations.
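
    EOF analysis is PCA applied to a space-time field: rows are time steps, columns are grid cells, the leading component's time series carries the coherent (here, solar) signal and its loading is the spatial pattern. A minimal synthetic sketch of that separation:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_t, n_x = 30, 500                      # 30 years, 500 grid cells
        solar = np.sin(2 * np.pi * np.arange(n_t) / 11.0)   # 11-year cycle
        pattern = rng.random(n_x)               # fixed deposition pattern
        field = np.outer(solar, pattern) + 0.3 * rng.standard_normal((n_t, n_x))

        pca = PCA(n_components=4)
        pcs = pca.fit_transform(field - field.mean(axis=0))  # PC time series
        eofs = pca.components_                               # spatial patterns
        print(pca.explained_variance_ratio_.round(2))        # PC1 dominates
        print(round(abs(np.corrcoef(pcs[:, 0], solar)[0, 1]), 3))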

  20. [Edge effects of forest gap in Pinus massoniana plantations on the decomposition of leaf litter recalcitrant components of Cinnamomum camphora and Toona ciliata].

    PubMed

    Zhang, Yan; Zhang, Dan Ju; Li, Xun; Liu, Hua; Zhang, Ming Jin; Yang, Wan Qin; Zhang, Jian

    2016-04-22

    The objective of the study was to evaluate the dynamics of recalcitrant components during foliar litter decomposition under the edge effects of forest gaps in Pinus massoniana plantations in the low hilly land of the Sichuan basin. A field litterbag experiment was conducted in seven forest gaps of different sizes (100, 225, 400, 625, 900, 1225 and 1600 m2) generated by thinning P. massoniana plantations. The degradation rates of four recalcitrant components, i.e., condensed tannins, total phenol, lignin and cellulose, in the foliar litter of two native species (Cinnamomum camphora and Toona ciliata) were measured at the gap edge and under the closed canopy. The results showed that the degradation rates of the recalcitrant components in T. ciliata litter, except for cellulose, were significantly higher at the gap edge than under the closed canopy. For C. camphora litter, only the degradation of lignin was higher at the gap edge than under the closed canopy. After one year of decomposition, all four recalcitrant components in both types of foliar litter exhibited increased degradation rates; condensed tannin degraded fastest, followed by total phenol and cellulose, while lignin degraded slowest. With increasing gap size, the degradation rates of the three recalcitrant components of T. ciliata other than cellulose were significantly higher at the edge of medium-sized gaps (400 and 625 m2) than at the edges of the other gaps, whereas lignin in the C. camphora litter showed the greatest degradation rate at the 625 m2 gap edge. Both temperature and initial litter content were significantly correlated with the degradation of litter recalcitrant components. Our results suggest that medium-sized gaps (400-625 m2) have a more pronounced edge effect on the degradation of litter recalcitrant components of the two native species in P. massoniana plantations; however, the effect also depends on species.

  1. Vorticity and helicity decompositions and dynamics with real Schur form of the velocity gradient

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Zhou

    2018-03-01

    The real Schur form (RSF) of a generic velocity gradient field ∇u is exploited to expose the structures of flows; in particular, our field decomposition results in two vorticities whose only mutual linkage constitutes the topological content of the global helicity (accordingly decomposed into two equal parts). The local transformation to the RSF may indicate alternative (co)rotating frame(s) for specifying the objective argument(s) of the constitutive equation. When ∇u is uniformly of RSF in a fixed Cartesian coordinate frame, i.e., ux = ux(x, y) and uy = uy(x, y), but uz = uz(x, y, z), the model, with the decomposed vorticities both frozen-in to u, describes two-component-two-dimensional flows coupled with a one-component-three-dimensional flow, lying between two-dimensional-three-component (2D3C) and fully three-dimensional-three-component flows, and may help cure the pathology in the helical 2D3C absolute equilibrium, making the latter effectively work in more realistic situations.
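
    The real Schur form itself is one library call; a minimal sketch with a made-up velocity gradient tensor, where a 2x2 block on the diagonal of T marks a complex-conjugate eigenpair, i.e. locally rotational motion:

        import numpy as np
        from scipy.linalg import schur

        grad_u = np.array([[0.1, -0.7,  0.2],    # generic, non-symmetric
                           [0.9,  0.0,  0.1],    # velocity gradient at a point
                           [0.3,  0.2, -0.1]])

        T, Z = schur(grad_u, output="real")      # grad_u = Z @ T @ Z.T
        print(np.round(T, 3))                    # quasi-upper-triangular RSF
        print(bool(np.allclose(grad_u, Z @ T @ Z.T)))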

  2. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    PubMed

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method is proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals are decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components are then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components are distinguished and rejected. Finally, the clean EEG signals are reconstructed by applying the inverse transforms of ICA and MEMD. Results on simulated and real data suggest that the proposed method can successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieves a marked increase in signal-to-noise ratio and decrease in mean square error after removing EOAs.

  3. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables by principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables; as a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
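
    The tutorial works in R; an equivalent minimal sketch in Python, with six correlated variables driven by two latent factors so that the first two PCs carry most of the variance:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        latent = rng.standard_normal((200, 2))               # two true factors
        X = (latent @ rng.standard_normal((2, 6))
             + 0.2 * rng.standard_normal((200, 6)))          # six observed vars

        Z = StandardScaler().fit_transform(X)                # correlation-matrix PCA
        pca = PCA().fit(Z)
        print(pca.explained_variance_ratio_.round(3))        # first two PCs dominate
        scores = pca.transform(Z)[:, :2]   # uncorrelated regressors for later modelling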

  4. Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.

    PubMed

    Nguyen, Phuong H

    2006-12-01

    Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with standard linear principal component analysis (PCA). In particular, the two methods were compared according to (1) their ability to reduce dimensionality and (2) their efficiency in representing peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to achieve a similar representation error for the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as a function of the first two principal components obtained from PCA, relatively well-structured free energy landscapes were obtained. In contrast, the free energy landscapes from NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps conflate several peptide conformations, while those in the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of such systems. (c) 2006 Wiley-Liss, Inc.

  5. Spectroscopic and Chemometric Analysis of Binary and Ternary Edible Oil Mixtures: Qualitative and Quantitative Study.

    PubMed

    Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica

    2016-04-19

    The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm(-1)). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R(2) > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.

  6. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    NASA Astrophysics Data System (ADS)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water-quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water-quality detection, in which partial least-squares regression (PLSR) has become the predominant technique; in some special cases, however, PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method is improved in this paper by using the principle of PLSR. The experimental results show that, for some special data sets, the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, optimized principal components that carry most of the original data information are extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the statistical package for the social sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral data set is processed by both PLSR and the improved PCR and the two sets of results are compared: they are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can thus be used in UV spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
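
    A sketch of the two baseline models being compared, on synthetic spectra (this is plain PCR and PLSR via scikit-learn, not the paper's "improved PCR", whose component selection follows the PLSR principle):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.random((80, 120))                   # UV spectra, 120 wavelengths
        y = X[:, 40:60].mean(axis=1) + 0.01 * rng.standard_normal(80)

        pcr = make_pipeline(PCA(n_components=5), LinearRegression())
        pls = PLSRegression(n_components=5)
        for name, model in [("PCR", pcr), ("PLSR", pls)]:
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(name, round(float(r2.mean()), 4))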

  7. Short communication: Discrimination between retail bovine milks with different fat contents using chemometrics and fatty acid profiling.

    PubMed

    Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar

    2017-06-01

    We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
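
    The chemometric pipeline above (standardize, project onto a few PCs, cluster the scores) in miniature; the synthetic three-group data stands in for the fatty acid profiles:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(m, 0.3, size=(30, 8))   # three sample groups
                       for m in (0.0, 1.0, 2.0)])

        scores = PCA(n_components=3).fit_transform(
            StandardScaler().fit_transform(X))            # significant PCs only
        tree = linkage(scores, method="ward")             # cluster on PC scores
        groups = fcluster(tree, t=3, criterion="maxclust")
        print(np.bincount(groups)[1:])                    # three clusters recovered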

  8. Use of multivariate statistics to identify unreliable data obtained using CASA.

    PubMed

    Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón

    2013-06-01

    In order to identify unreliable data in a dataset of motility parameters from a pilot study, acquired by a veterinarian with experience in boar semen handling but without experience in the operation of a computer-assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted, incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml, and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement of the semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained the evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same ones observed as abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of a CASA system operator, which may be particularly useful in the quality control of semen analysis using CASA systems.

  9. [Spatial distribution characteristics of the physical and chemical properties of water in the Kunes River after the supply of snowmelt during spring].

    PubMed

    Liu, Xiang; Guo, Ling-Peng; Zhang, Fei-Yun; Ma, Jie; Mu, Shu-Yong; Zhao, Xin; Li, Lan-Hai

    2015-02-01

    Eight physical and chemical indicators related to water quality were monitored at nineteen sampling sites along the Kunes River at the end of the spring snowmelt season. To investigate the spatial distribution characteristics of the physical and chemical properties of the water, cluster analysis (CA), discriminant analysis (DA) and principal component analysis (PCA) were employed. The cluster analysis showed that the Kunes River can be divided into three reaches according to the similarities in water physical and chemical properties among sampling sites, representing the upstream, midstream and downstream of the river, respectively. The discriminant analysis demonstrated that the reliability of this classification was high, and that DO, Cl- and BOD5 were the significant indexes leading to it. Three principal components were extracted by the principal component analysis, with a cumulative variance contribution of 86.90%. The principal component analysis also indicated that the physical and chemical properties of the water were mostly affected by EC, ORP, NO3(-)-N, NH4(+)-N, Cl- and BOD5. The sorted principal component scores at each sampling site showed that water quality was mainly influenced by DO upstream, by pH midstream, and by the remaining indicators downstream. The order of the comprehensive principal component scores revealed that water quality degraded from upstream to downstream, i.e., the upstream had the best water quality, followed by the midstream, while the water quality downstream was the worst. This result corresponded exactly to the three reaches classified by cluster analysis. Anthropogenic activity and the accumulation of pollutants along the river were probably the main causes of this spatial difference.

  10. Evidence for age-associated disinhibition of the wake drive provided by scoring principal components of the resting EEG spectrum in sleep-provoking conditions.

    PubMed

    Putilov, Arcady A; Donskaya, Olga G

    2016-01-01

    Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent the summation of the simultaneous influences of several sleep-wake regulatory processes. Scoring its orthogonal (uncorrelated) principal components can help separate the brain signatures of these processes. In particular, opposite age-associated changes were documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 years and 40 healthy older adults aged between 45 and 66 years. On the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction in response to eyes-closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age, and they provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.

  11. Analysis of Microbial Community Composition and Methane Production From Northern Peatlands Across a Climate Gradient

    NASA Astrophysics Data System (ADS)

    Sarno, A. F.; Humphreys, E.; Olefeldt, D.; Heffernan, L.; Roman, T. D.; Sebestyen, S.; Kolka, R.; Yavitt, J. B.; Finn, D.; Cadillo-Quiroz, H.

    2017-12-01

    Northern peatland ecosystems allow for the accumulation of a carbon (C) pool as the rate of photosynthesis exceeds the rate of organic carbon decomposition. Under current climate conditions, many northern peatlands act as a C sink; however, changes in climate and other environmental conditions, such as soil permafrost melting, are capable of changing the decomposition cascade. Here we take advantage of four peatlands situated along a climate gradient from tundra (Daring Lake, Canada) to boreal forest (Lutose, Canada) to temperate broadleaf and mixed forest (Bog Lake, MN and Chicago Bog, NY) biomes to assess how the relative abundance of microbial functional groups and substrate availability within the microbial community might impact the decomposition of soil organic matter to methane. The four peatlands had similar hydrology and geochemistry and were poor fen types. Soil, water and gas samples were collected at the water table level. Microbial community composition, derived from Illumina amplicon sequencing of the 16S rRNA gene, and geochemical and climate variables were analyzed with principal component regression analysis to determine major drivers of community variation. Mean annual temperature (r2=0.53), mean annual precipitation (r2=0.36), water table level (r2=0.43) and soil temperature (r2=0.49), were all statistically significant drivers of both general microbial and methanogen community composition (p value < 0.001). The relative abundance of Methanocella, Methanosarcina and Methanobacterium varied significantly across the climate gradient (p value < 0.05), however the majority of methanogen genera did not. Interestingly, dissolved methane (r2=0.24) was statistically significant at the general community level (p value < 0.001), but not significant when tested against only the methanogen community. The results demonstrate that environmental factors predicted to change over time due to climate change will have a significant impact on microbial community composition and C sinks within Northern peatlands. Further analyses of microbial processes that produce methanogenic substrates such as fermentation and syntrophic reactions, in tandem with the further identification and quantification of methanogens, will elucidate other drivers of methane production in Northern peatlands.

  12. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    NASA Astrophysics Data System (ADS)

    Williams, B. J.; Zhang, Y.; Zuo, X.; Martinez, R. E.; Walker, M. J.; Kreisberg, N. M.; Goldstein, A. H.; Docherty, K. S.; Jimenez, J. L.

    2015-12-01

    Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a GC column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer (MS). Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, re-desorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  13. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    NASA Astrophysics Data System (ADS)

    Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; Martinez, Raul E.; Walker, Michael J.; Kreisberg, Nathan M.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-04-01

    Atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO+ (m/z 30), NO2+ (m/z 46), SO+ (m/z 48), and SO2+ (m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO2+ (m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, redesorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  14. Organic and inorganic decomposition products from the thermal desorption of atmospheric particles

    DOE PAGES

    Williams, Brent J.; Zhang, Yaping; Zuo, Xiaochen; ...

    2016-04-11

    Here, atmospheric aerosol composition is often analyzed using thermal desorption techniques to evaporate samples and deliver organic or inorganic molecules to various designs of detectors for identification and quantification. The organic aerosol (OA) fraction is composed of thousands of individual compounds, some with nitrogen- and sulfur-containing functionality, and often contains oligomeric material, much of which may be susceptible to decomposition upon heating. Here we analyze thermal decomposition products as measured by a thermal desorption aerosol gas chromatograph (TAG) capable of separating thermal decomposition products from thermally stable molecules. The TAG impacts particles onto a collection and thermal desorption (CTD) cell, and upon completion of sample collection, heats and transfers the sample in a helium flow up to 310 °C. Desorbed molecules are refocused at the head of a gas chromatography column that is held at 45 °C and any volatile decomposition products pass directly through the column and into an electron impact quadrupole mass spectrometer. Analysis of the sample introduction (thermal decomposition) period reveals contributions of NO + ( m/z 30), NO 2 + ( m/z 46), SO + ( m/z 48), and SO 2 + ( m/z 64), derived from either inorganic or organic particle-phase nitrate and sulfate. CO 2 + ( m/z 44) makes up a major component of the decomposition signal, along with smaller contributions from other organic components that vary with the type of aerosol contributing to the signal (e.g., m/z 53, 82 observed here for isoprene-derived secondary OA). All of these ions are important for ambient aerosol analyzed with the aerosol mass spectrometer (AMS), suggesting similarity of the thermal desorption processes in both instruments. Ambient observations of these decomposition products compared to organic, nitrate, and sulfate mass concentrations measured by an AMS reveal good correlation, with improved correlations for OA when compared to the AMS oxygenated OA (OOA) component. TAG signal found in the traditional compound elution time period reveals higher correlations with AMS hydrocarbon-like OA (HOA) combined with the fraction of OOA that is less oxygenated. Potential to quantify nitrate and sulfate aerosol mass concentrations using the TAG system is explored through analysis of ammonium sulfate and ammonium nitrate standards. While chemical standards display a linear response in the TAG system, redesorptions of the CTD cell following ambient sample analysis show some signal carryover on sulfate and organics, and new desorption methods should be developed to improve throughput. Future standards should be composed of complex organic/inorganic mixtures, similar to what is found in the atmosphere, and perhaps will more accurately account for any aerosol mixture effects on compositional quantification.

  15. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.

  16. Studies on the thermal breakdown of common Li-ion battery electrolyte components

    DOE PAGES

    Lamb, Joshua; Orendorff, Christopher J.; Roth, Emanuel Peter; ...

    2015-08-06

    While much attention is paid to the impact of the active materials on the catastrophic failure of lithium ion batteries, much of the severity of a battery failure is also governed by the electrolytes used, which are typically flammable themselves and can decompose during battery failure. The use of LiPF6 salt can be problematic as well, not only catalyzing electrolyte decomposition, but also providing a mechanism for HF production. This work evaluates the safety performance of the common components ethylene carbonate (EC), diethyl carbonate (DEC), dimethyl carbonate (DMC), and ethyl methyl carbonate (EMC) in the context of the gasses produced during thermal decomposition, looking at both the quantity and composition of the vapor produced. EC and DEC were found to be the largest contributors to gas production, both producing upwards of 1.5 moles of gas/mole of electrolyte. DMC was found to be relatively stable, producing very little gas regardless of the presence of LiPF6. EMC was stable on its own, but the addition of LiPF6 catalyzed decomposition of the solvent. As a result, while gas analysis did not show evidence of significant quantities of any acutely toxic materials, the gasses themselves all contained enough flammable components to potentially ignite in air.

  17. An innovative approach for characteristic analysis and state-of-health diagnosis for a Li-ion cell based on the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Cho, B. H.

    2014-08-01

    This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state of health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool for analyzing the discharging/charging voltage signal (DCVS) of a Li-ion cell, with its non-stationary and transient phenomena. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Using MRA with wavelet decomposition, this information can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order-3 Daubechies wavelet (db3) selected as the best wavelet function and scale 5 as the optimal decomposition depth. In particular, the present approach develops these investigations one step further by examining the low- and high-frequency components (approximation component An and detail component Dn, respectively) extracted from various Li-ion cells whose electrochemical characteristics differ because of aging. Experimental results show the effectiveness of the DWT-based approach for reliable SOH diagnosis of a Li-ion cell.
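
    A minimal MRA sketch with PyWavelets, using db3 at decomposition level 5 as in the paper; the synthetic voltage curve is a stand-in for a measured DCVS:

        import numpy as np
        import pywt

        t = np.linspace(0, 1, 4096)
        dcvs = 4.2 - 1.0 * t + 0.02 * np.sin(2 * np.pi * 400 * t) * np.exp(-5 * t)

        coeffs = pywt.wavedec(dcvs, "db3", level=5)   # [A5, D5, D4, D3, D2, D1]

        # Rebuild each component alone so the bands (A5, D5..D1) can be
        # inspected separately; their sum reproduces the original signal.
        parts = []
        for i in range(len(coeffs)):
            sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            parts.append(pywt.waverec(sel, "db3")[: dcvs.size])
        print(bool(np.allclose(np.sum(parts, axis=0), dcvs)))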

  18. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed by combining David Eyre's time stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and the reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
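
    A toy illustration of the Eyre-style convex-concave splitting on the two-phase 1D Cahn-Hilliard equation (pseudo-spectral, periodic boundaries, a linearly stabilized variant with stabilization parameter s); this is only the time-stepping idea the paper builds on, not its multi-component Schur-complement scheme:

        import numpy as np

        N, L, eps, dt, s = 256, 2 * np.pi, 0.1, 0.1, 2.0
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
        k2, k4 = k ** 2, k ** 4

        rng = np.random.default_rng(0)
        u = 0.05 * rng.standard_normal(N)          # near-uniform initial mixture

        # Implicit (convex, stabilized) part enters the denominator;
        # the concave remainder of the free energy is treated explicitly.
        denom = 1.0 + dt * (s * k2 + eps ** 2 * k4)
        for _ in range(2000):
            g = u ** 3 - (1.0 + s) * u
            u = np.fft.ifft((np.fft.fft(u) - dt * k2 * np.fft.fft(g)) / denom).real

        # Spinodal decomposition: phases separate toward u = -1 and u = +1.
        print(round(float(u.min()), 2), round(float(u.max()), 2))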

  19. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    NASA Astrophysics Data System (ADS)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods that treat the environment holistically. Principal component analysis may be considered one of such methods, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and the regularities shaping the relationships between various elements of the environment under study to be discovered on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environmental qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and production of factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in the Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area, with high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25,000 and maps of forest habitats. Nine factors reflecting the basic environmental elements were calculated: maximum height (m), minimum height (m), average height (m), length of watercourses (km), area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of the individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting the commune's ecodiversity were distinguished: a hypsometric component (PC1), a deciduous forest habitats component (PC2), a river valleys and alder habitats component (PC3), and a lakes component (PC4). The distinguished factors characterise the natural qualities of the postglacial area and reflect well the role of the four most important groups of environmental components in shaping its ecodiversity. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores, and five classes of diversity were then isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were delineated. These regions are also very attractive for tourists and valuable in terms of their rich nature, which includes protected areas such as the Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to the research methods used so far. Literature: Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.
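
    Steps b)-d) in miniature: a factor matrix over grid cells is standardized, reduced to four PCs, and a resultant index is split into five classes. The equal-weight sum of PC scores and the quantile class breaks are assumptions, since the abstract does not give the exact combination rule:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        factors = rng.random((358, 9))       # 358 grid cells x 9 factors (synthetic)

        scores = PCA(n_components=4).fit_transform(
            StandardScaler().fit_transform(factors))

        index = scores.sum(axis=1)           # resultant ecodiversity index (one choice)
        breaks = np.quantile(index, [0.2, 0.4, 0.6, 0.8])
        classes = np.digitize(index, breaks) # 0..4: very low ... very high
        print(np.bincount(classes))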

  20. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  1. Rosacea assessment by erythema index and principal component analysis segmentation maps

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Rubins, Uldis; Saknite, Inga; Spigulis, Janis

    2017-12-01

    RGB images of rosacea were analyzed using segmentation maps of principal component analysis (PCA) and the erythema index (EI). Areas of segmented clusters were compared to Clinician's Erythema Assessment (CEA) values given by two dermatologists. The results show that visible blood vessels are segmented more precisely on maps of the erythema index and the third principal component (PC3). In many cases, the distributions of clusters on EI and PC3 maps are very similar. Mean cluster areas on these maps show a decrease in the area of blood vessels and erythema and an increase in lighter skin area after therapy for patients with CEA = 2 at the first visit and CEA = 1 at the second visit. This study shows that EI and PC3 maps are more useful than maps of the first (PC1) and second (PC2) principal components for indicating vascular structures and erythema on the skin of rosacea patients and for therapy monitoring.

  2. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data instead of directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results for survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
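
    The core step, extracting highly correlated levelling errors by low-order principal component reconstruction, can be sketched in a few lines; the stand-in data matrix, the number of pseudo tie lines, and the number of retained components k are all illustrative assumptions.

    ```python
    import numpy as np

    def low_order_reconstruction(D, k=2):
        """Reconstruct D from its first k principal components (via SVD)."""
        mean = D.mean(axis=0)
        U, s, Vt = np.linalg.svd(D - mean, full_matrices=False)
        return mean + (U[:, :k] * s[:k]) @ Vt[:k]

    # Rows: pseudo tie lines sampled from the flight-line difference data.
    D = np.random.default_rng(1).normal(size=(20, 500))   # stand-in data
    error = low_order_reconstruction(D, k=2)    # correlated levelling error
    levelled = D - error                        # differences with error removed
    ```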

  3. [Content of mineral elements of Gastrodia elata by principal components analysis].

    PubMed

    Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei

    2015-03-01

    To study the content of mineral elements and the principal components in Gastrodia elata, mineral elements were determined by ICP and the data were analyzed with SPSS. K had the highest content, with an average of 15.31 g x kg(-1). The average content of N was 8.99 g x kg(-1), second only to K. The coefficients of variation of K and N were small, while that of Mn was the largest at 51.39%. A highly significant positive correlation was found among N, P, and K. Three principal components were selected by principal component analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The contents of K and N were higher and relatively stable, whereas the variation of Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.

  4. Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI

    PubMed Central

    Pearson, William G.; Zumwalt, Ann C.

    2013-01-01

    Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to the coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism were collected from each dynamic (frame) of a dMRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both the single-subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single-subject PCA, the first principal component accounted for 81.5% of the variance. For the between-subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion The eigenvectors indicate that two muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608

  5. Application of Spectral Analysis Techniques in the Intercomparison of Aerosol Data. Part II: Using Maximum Covariance Analysis to Effectively Compare Spatiotemporal Variability of Satellite and AERONET Measured Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-01-01

    The Moderate Resolution Imaging SpectroRadiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR) provide regular aerosol observations with global coverage. It is essential to examine the coherency between space- and ground-measured aerosol parameters in representing aerosol spatial and temporal variability, especially in the climate forcing and model validation context. In this paper, we introduce Maximum Covariance Analysis (MCA), also known as Singular Value Decomposition analysis, as an effective way to compare correlated aerosol spatial and temporal patterns between satellite measurements and AERONET data. This technique not only successfully extracts the variability of major aerosol regimes but also allows the simultaneous examination of aerosol variability both spatially and temporally. More importantly, it accommodates the sparsely distributed AERONET data well, for which other spectral decomposition methods, such as Principal Component Analysis, do not yield satisfactory results. The comparison shows overall good agreement between MODIS/MISR and AERONET AOD variability. The correlations between the first three modes of the MCA results for both MODIS/AERONET and MISR/AERONET are above 0.8 for the full data set and above 0.75 for the AOD anomaly data. The correlations between MODIS and MISR modes are also quite high (greater than 0.9). We also examine the extent of spatial agreement between satellite and AERONET AOD data at the selected stations. Some sites with disagreements in the MCA results, such as Kanpur, also have low spatial coherency. This should be associated partly with high AOD spatial variability and partly with uncertainties in satellite retrievals due to seasonally varying aerosol types and surface properties.
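
    A minimal sketch of MCA itself, the SVD of the cross-covariance between two co-registered anomaly time series, is shown below; the matrices stand in for satellite and AERONET anomalies and are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 50))     # e.g. 120 months x 50 satellite grid cells
    Y = rng.normal(size=(120, 20))     # e.g. 120 months x 20 AERONET sites
    X -= X.mean(axis=0)                # work with anomalies
    Y -= Y.mean(axis=0)

    C = X.T @ Y / (X.shape[0] - 1)     # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)

    # Expansion coefficients (temporal modes) of the leading coupled pattern;
    # their correlation measures how coherently the two datasets co-vary.
    a1, b1 = X @ U[:, 0], Y @ Vt[0]
    r = np.corrcoef(a1, b1)[0, 1]
    ```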

  6. Understanding determinants of socioeconomic inequality in mental health in Iran's capital, Tehran: a concentration index decomposition approach.

    PubMed

    Morasae, Esmaeil Khedmati; Forouzan, Ameneh Setareh; Majdzadeh, Reza; Asadi-Lari, Mohsen; Noorbala, Ahmad Ali; Hosseinpoor, Ahmad Reza

    2012-03-26

    Mental health is of special importance regarding socioeconomic inequalities in health. On the one hand, mental health status mediates the relationship between economic inequality and health; on the other hand, mental health as an "end state" is affected by social factors and socioeconomic inequality. In spite of this, in examining socioeconomic inequalities in health, mental health has attracted less attention than physical health. As a first attempt in Iran, the objectives of this paper were to measure socioeconomic inequality in mental health, and then to untangle and quantify the contributions of potential determinants of mental health to the measured socioeconomic inequality. In a cross-sectional observational study, mental health data were taken from an Urban Health Equity Assessment and Response Tool (Urban HEART) survey, conducted on 22 300 Tehran households in 2007 and covering people aged 15 and above. Principal component analysis was used to measure the economic status of households. As a measure of socioeconomic inequality, a concentration index of mental health was applied and decomposed into its determinants. The overall concentration index of mental health in Tehran was -0.0673 (95% CI: -0.070 to -0.057). Decomposition of the concentration index revealed that economic status made the largest contribution (44.7%) to socioeconomic inequality in mental health. Educational status (13.4%), age group (13.1%), district of residence (12.5%) and employment status (6.5%) were also important contributors to the inequality. Socioeconomic inequalities exist in mental health status in Iran's capital, Tehran. Since the root of this avoidable inequality lies in sectors outside the health system, a holistic mental health policy approach which includes social and economic determinants should be adopted to redress the inequitable distribution of mental health.
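
    A hedged sketch of the two computational ingredients, the concentration index and its regression-based decomposition into determinant contributions, follows; the variables, coefficients, and data are invented stand-ins, not the Urban HEART survey.

    ```python
    import numpy as np

    def concentration_index(y, ses):
        """CI = 2 * cov(y, fractional SES rank) / mean(y)."""
        r = (np.argsort(np.argsort(ses)) + 0.5) / len(ses)   # fractional rank
        return 2 * np.cov(y, r)[0, 1] / y.mean()

    rng = np.random.default_rng(0)
    n = 1000
    educ, age, ses = rng.random(n), rng.random(n), rng.random(n)  # stand-ins
    health = 1 + 0.8 * educ + 0.3 * age + 0.6 * ses + 0.5 * rng.random(n)

    # Linear model of health on its determinants, then Wagstaff-style shares:
    # contribution_k = elasticity_k * CI(determinant_k), relative to total CI.
    X = np.column_stack([np.ones(n), educ, age, ses])
    beta = np.linalg.lstsq(X, health, rcond=None)[0]
    ci_total = concentration_index(health, ses)
    for name, x, b in zip(["education", "age", "economic status"],
                          [educ, age, ses], beta[1:]):
        share = b * x.mean() / health.mean() * concentration_index(x, ses)
        print(f"{name}: {share / ci_total:.1%} of measured inequality")
    ```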

  7. Understanding determinants of socioeconomic inequality in mental health in Iran's capital, Tehran: a concentration index decomposition approach

    PubMed Central

    2012-01-01

    Background Mental health is of special importance regarding socioeconomic inequalities in health. On the one hand, mental health status mediates the relationship between economic inequality and health; on the other hand, mental health as an "end state" is affected by social factors and socioeconomic inequality. In spite of this, in examining socioeconomic inequalities in health, mental health has attracted less attention than physical health. As a first attempt in Iran, the objectives of this paper were to measure socioeconomic inequality in mental health, and then to untangle and quantify the contributions of potential determinants of mental health to the measured socioeconomic inequality. Methods In a cross-sectional observational study, mental health data were taken from an Urban Health Equity Assessment and Response Tool (Urban HEART) survey, conducted on 22 300 Tehran households in 2007 and covering people aged 15 and above. Principal component analysis was used to measure the economic status of households. As a measure of socioeconomic inequality, a concentration index of mental health was applied and decomposed into its determinants. Results The overall concentration index of mental health in Tehran was -0.0673 (95% CI: -0.070 to -0.057). Decomposition of the concentration index revealed that economic status made the largest contribution (44.7%) to socioeconomic inequality in mental health. Educational status (13.4%), age group (13.1%), district of residence (12.5%) and employment status (6.5%) were also important contributors to the inequality. Conclusions Socioeconomic inequalities exist in mental health status in Iran's capital, Tehran. Since the root of this avoidable inequality lies in sectors outside the health system, a holistic mental health policy approach which includes social and economic determinants should be adopted to redress the inequitable distribution of mental health. PMID:22449237

  8. Is the status of diabetes socioeconomic inequality changing in Kurdistan province, west of Iran? A comparison of two surveys

    PubMed Central

    Moradi, Ghobad; Majdzadeh, Reza; Mohammad, Kazem; Malekafzali, Hossein; Jafari, Saeede; Holakouie-Naieni, Kourosh

    2016-01-01

    Background: About 80% of deaths in 350 million cases of diabetes in the world occur in low- and middle-income countries. The aim of this study was to determine the status of diabetes socioeconomic inequality and the share of determinants of inequalities in Kurdistan Province, West of Iran, using two surveys in 2005 and 2009. Methods: Data were collected from non-communicable disease surveillance surveys in Kurdistan in 2005 and 2009. In this study, the socioeconomic status (SES) of the participants was determined based on the residential area and assets, using the principal component analysis statistical method. We used the concentration index and logistic regression to determine inequality. Decomposition analysis was used to determine the share of each determinant of inequality. Results: The prevalence of diabetes reported by individuals changed from 0.9% (95% CI: 0.6-1.3) in 2005 to 3.1% (95% CI: 2-4) in 2009. The diabetes concentration index changed from -0.163 (95% CI: -0.301 to -0.024) in 2005 to 0.273 (95% CI: 0.101 to 0.445) in 2009. The results of decomposition analysis revealed that in 2009, 67% of the inequality was due to low socioeconomic status and 16% to area of residence, i.e., living in rural areas. Conclusion: The prevalence of diabetes significantly increased, and the diabetes inequality shifted from the poor to groups with better SES. The increased prevalence of diabetes among high-SES individuals may be due to their better responses to diabetes control and awareness programs or to the type of services they were provided during these years. PMID:27493919

  9. Elucidating effects of atmospheric deposition and peat decomposition processes on mercury accumulation rates in a northern Minnesota peatland over last 10,000 cal years

    NASA Astrophysics Data System (ADS)

    Nater, E. A.; Furman, O.; Toner, B. M.; Sebestyen, S. D.; Tfaily, M. M.; Chanton, J.; Fissore, C.; McFarlane, K. J.; Hanson, P. J.; Iversen, C. M.; Kolka, R. K.

    2014-12-01

    Climate change has the potential to affect mercury (Hg), sulfur (S), and carbon (C) stores and cycling in northern peatland ecosystems (NPEs). SPRUCE (Spruce and Peatland Responses Under Climate and Environmental change) is an interdisciplinary study of the effects of elevated temperature and CO2 enrichment on NPEs. Peat cores (0-3.0 m) were collected from 16 large plots located on the S1 peatland (an ombrotrophic bog treed with Picea mariana and Larix laricina) in August 2012 for baseline characterization before the experiment begins. Peat samples were analyzed at depth increments for total Hg, bulk density, humification indices, and elemental composition. Net Hg accumulation rates over the last 10,000 years were derived from Hg concentrations and peat accumulation rates based on peat depth chronology established using 14C and 13C dating of peat cores. Historic Hg deposition rates are being modeled from pre-industrial deposition rates in S1 scaled by regional lake sediment records. Effects of peatland processes and factors (hydrology, decomposition, redox chemistry, vegetative changes, microtopography) on the biogeochemistry of Hg, S, and other elements are being assessed by comparing observed elemental depth profiles with accumulation profiles predicted solely from atmospheric deposition. We are using principal component analyses and cluster analyses to elucidate relationships between humification indices, peat physical properties, and inorganic and organic geochemistry data, in order to interpret the main processes controlling net Hg accumulation and elemental concentrations in surface and subsurface peat layers. These findings are critical to predicting how climate change will affect both future accumulation of Hg and existing Hg stores in NPEs, and for providing reference baselines for future SPRUCE investigations.

  10. Obesity, metabolic syndrome, impaired fasting glucose, and microvascular dysfunction: a principal component analysis approach.

    PubMed

    Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G

    2012-11-13

    We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, and without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables, allowing inferences about the possible biological meaning of associations between them without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and at peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements shared similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max), all varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite direction. Principal component 3 was associated only with microvascular variables, all varying in the same way (functional capillary density, RBCV, and RBCV(max)). Fasting plasma glucose was related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, a multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.

  11. The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.

    PubMed

    Bagley, C

    1980-03-01

    The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.

  12. The Derivation of Job Compensation Index Values from the Position Analysis Questionnaire (PAQ). Report No. 6.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.; And Others

    The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…

  13. Elastic and acoustic wavefield decompositions and application to reverse time migrations

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong

    P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migration (RTM). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield's vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic elastic RTM which includes P/S vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle-velocity definitions of the decomposed P- and S-wave Poynting vectors. An excitation-amplitude imaging condition that scales the receiver wavelet by the source vector magnitude then produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. This simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it takes less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decomposition are much more accurate than those without. The up/down separation algorithm is also applicable in acoustic RTM, where both the (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitates the analysis of artifacts and of the imaging ability of the four images. Artifacts may exist in all the decomposed images, but their positions and types differ. The causes of artifacts in the different images are explained and illustrated with sketches and numerical tests.
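
    For orientation, the traditional operator-based separation that this work moves beyond can be sketched as a divergence/curl split of a 2D particle-velocity field; the grid spacing and stand-in wavefield below are assumptions, and note that the scalar outputs are exactly what loses the vector component information discussed above.

    ```python
    import numpy as np

    def curl_div_separate(vx, vz, dx, dz):
        """Scalar P (divergence) and S (curl) fields from a 2D vector wavefield."""
        p = np.gradient(vx, dx, axis=1) + np.gradient(vz, dz, axis=0)  # div -> P
        s = np.gradient(vz, dx, axis=1) - np.gradient(vx, dz, axis=0)  # curl -> S
        return p, s

    rng = np.random.default_rng(0)
    vx = rng.normal(size=(100, 100))      # stand-in snapshot, axis 0 = depth z
    vz = rng.normal(size=(100, 100))
    p, s = curl_div_separate(vx, vz, dx=10.0, dz=10.0)
    ```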

  14. Perceptions of the Principal Evaluation Process and Performance Criteria: A Qualitative Study of the Challenge of Principal Evaluation

    ERIC Educational Resources Information Center

    Faginski-Stark, Erica; Casavant, Christopher; Collins, William; McCandless, Jason; Tencza, Marilyn

    2012-01-01

    Recent federal and state mandates have tasked school systems to move beyond principal evaluation as a bureaucratic function and to re-imagine it as a critical component to improve principal performance and compel school renewal. This qualitative study investigated the district leaders' and principals' perceptions of the performance evaluation…

  15. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, ICA's assumption about the statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising, and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology for the classificatory decomposition of signals. One of the main advantages of our approach is the fact that, rather than solely relying on often unrealistic assumptions about the statistical independence of sources, components are generated in the light of the underlying classification problem itself. PMID:17118151
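
    A minimal sketch of the ICA stage alone (the MOEA and rough-set layers are beyond a few lines) is given below, using scikit-learn's FastICA on synthetic mixed sources; the channel count and source waveforms are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    S = np.c_[np.sin(8 * np.pi * t),               # two synthetic sources
              np.sign(np.sin(3 * np.pi * t))]
    X = S @ rng.normal(size=(2, 16))               # mixed into 16 channels

    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(X)              # estimated independent sources
    # In the classificatory-decomposition setting, `components` would then feed
    # a classifier predicting the stimulus condition of each trial.
    ```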

  16. 2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.

    PubMed

    Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen

    2017-09-19

    A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or their potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other deals with their structural fragments. The predictor has self-learning and feedback features that automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for providing timely and useful clues during the process of drug development.

  17. Fungal colonization and decomposition of leaves and stems of Salix arctica on deglaciated moraines in high-Arctic Canada

    NASA Astrophysics Data System (ADS)

    Osono, Takashi; Matsuoka, Shunsuke; Hirose, Dai; Uchida, Masaki; Kanda, Hiroshi

    2014-06-01

    Fungal colonization, succession, and decomposition of leaves and stems of Salix arctica were studied to estimate the roles of fungi in decomposition processes in the high Arctic. Samples were collected from five moraines with different periods of development since deglaciation to investigate the effects of ecosystem development on decomposition processes during primary succession. The total hyphal length and the length of darkly pigmented hyphae increased during decomposition of leaves and stems and did not vary among the moraines. Four fungal morphotaxa were frequently isolated from both leaves and stems. The frequencies of occurrence of two morphotaxa varied with the decay class of leaves and/or stems. The hyphal lengths and the frequencies of occurrence of fungal morphotaxa were positively or negatively correlated with the contents of organic chemical components and nutrients in leaves and stems, suggesting roles for fungi in chemical changes in the field. Pure-culture decomposition tests demonstrated that the fungal morphotaxa were cellulose decomposers. Our results suggest that fungi took part in the chemical changes in decomposing leaves and stems even in the harsh environment of the high Arctic.

  18. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots' (generated with the high-fidelity model), to project the entire set of input and state variables of the model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less of the snapshots' variance, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.

  19. ECG-derived respiration based on iterated Hilbert transform and Hilbert vibration decomposition.

    PubMed

    Sharma, Hemant; Sharma, K K

    2018-06-01

    Monitoring of respiration using the electrocardiogram (ECG) is desirable for the simultaneous study of cardiac activity and respiration, in terms of comfort, mobility, and healthcare cost. This paper proposes a new approach for deriving the respiration from a single-lead ECG based on the iterated Hilbert transform (IHT) and Hilbert vibration decomposition (HVD). The ECG signal is first decomposed into multicomponent sinusoidal signals using the IHT technique. Afterward, the lower-order amplitude components obtained from the IHT are filtered using the HVD to extract the respiration information. Experiments were performed on the Fantasia and Apnea-ECG datasets. The performance of the proposed ECG-derived respiration (EDR) approach was compared with existing techniques, including principal component analysis (PCA), R-peak amplitudes (RPA), respiratory sinus arrhythmia (RSA), slopes of the QRS complex, and the R-wave angle. The proposed technique showed the highest median correlation values (first, third quartile) for the Fantasia and Apnea-ECG datasets, at 0.699 (0.55, 0.82) and 0.57 (0.40, 0.73), respectively. The proposed algorithm also provided the lowest mean absolute error and average percentage error computed from the EDR and reference (recorded) respiration signals for the Fantasia and Apnea-ECG datasets, at 1.27 and 9.3%, and 1.35 and 10.2%, respectively. In experiments performed over different age groups of the Fantasia dataset, the proposed algorithm provided effective results in the younger population and also outperformed the existing techniques for elderly subjects. The proposed EDR technique has advantages over existing techniques in terms of better agreement in respiratory rates; specifically, it removes the extra step of detecting fiducial points in the ECG for the estimation of respiration, which makes the process effective and less complex. The above performance results, obtained from two different datasets, validate that the proposed approach can be used for monitoring respiration from a single-lead ECG.
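
    As a hedged sketch of the first ingredient only, the code below extracts a slowly varying amplitude component from a synthetic ECG-like signal via the analytic signal (Hilbert transform) and band-limits it to a respiratory band; the sampling rate, band edges, and signal are assumptions, and the paper's full IHT/HVD pipeline is considerably more elaborate.

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 250                                    # assumed sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    # Synthetic stand-in: a 1.2 Hz "cardiac" carrier amplitude-modulated by a
    # 0.25 Hz "respiratory" component.
    ecg = np.sin(2 * np.pi * 1.2 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t))

    envelope = np.abs(hilbert(ecg))             # instantaneous amplitude
    b, a = butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
    edr = filtfilt(b, a, envelope)              # ECG-derived respiration estimate
    ```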

  20. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    transformation models, such as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the...research project. References: 1. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern...Rohr K, Stiehl HS, Sprengel R, Buzug TM, Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions

  1. Improving the Diagnostic Specificity of CT for Early Detection of Lung Cancer: 4D CT-Based Pulmonary Nodule Elastometry

    DTIC Science & Technology

    2013-08-01

    as thin-plate spline (1-3) or elastic-body spline (4, 5), is locally controlled. One of the main motivations behind the use of B-spline...FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence...Weese J, Kuhn MH. Landmark-based elastic registration using approximating thin-plate splines. IEEE Transactions on Medical Imaging. 2001;20(6):526-34

  2. Experimental Researches on the Durability Indicators and the Physiological Comfort of Fabrics using the Principal Component Analysis (PCA) Method

    NASA Astrophysics Data System (ADS)

    Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.

    2017-06-01

    This work examined the classification of combed wool fabrics intended for the manufacture of outer garments in terms of the values of their durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for multivariate/multi-dimensional data analysis which aims to reduce, in a controlled way, the number of variables (columns) of the data matrix to as few as two or three. Based on the information about each group/assortment of fabrics, the goal is therefore to replace nine inter-correlated variables with only two or three new variables, called components. The target of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.

  3. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). Multiimages in various formats have a multivariate pixel value associated with each pixel location, scaled and quantized into a gray-level vector; bivariate statistics describe the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and to reveal its intercomponent dependencies when some off-diagonal elements of the covariance matrix are not small; for display purposes, the principal component images must be postprocessed back into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
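
    A minimal sketch of the PCT as described, assuming a stand-in six-band multiimage: each pixel's gray-level vector is projected onto the eigenvectors of the band covariance matrix, ordered by decreasing variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    multiimage = rng.random((128, 128, 6))       # rows x cols x 6 bands (stand-in)

    pixels = multiimage.reshape(-1, 6)           # one gray-level vector per pixel
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)           # band covariance matrix
    evals, evecs = np.linalg.eigh(cov)           # eigh returns ascending order
    order = np.argsort(evals)[::-1]              # sort descending by variance

    pct = (pixels - mean) @ evecs[:, order]      # decorrelated components
    pc_images = pct.reshape(128, 128, 6)         # PC1 carries the most variance
    ```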

  4. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.

  5. Decomposition of energetic molecules by interfacing with a catalytic oxide: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Wang, Fenggong; Tsyshevsky, Roman; Zverev, Anton; Mitrofanov, Anatoly; Kuklja, Maija

    Organic-inorganic interfaces present both challenges and opportunities for designing systems that possess properties and functionalities inaccessible to each individual component. In particular, mixing with a photocatalyst may significantly affect the adsorption, decomposition, and photoresponse of organic molecules. Here, we choose the formulation of TiO2 and trinitrotoluene (TNT), a highly catalytic oxide and a prominent explosive, as a prototypical example to explore the effect of interfacial interactions on the photosensitivity of energetic materials. We show that whether a catalytic oxide additive can promote molecular decomposition under light illumination depends largely on the band alignment between the oxide surface and the energetic molecule. Furthermore, an oxygen vacancy can lead to electron density transfer from the surface to the energetic molecules, causing an enhancement of the bonding between molecules and surface and a reduction of the molecular decomposition activation barriers.

  6. Determination of the thermal stability of perfluoroalkylethers

    NASA Technical Reports Server (NTRS)

    Helmick, Larry S.; Jones, William R., Jr.

    1990-01-01

    The thermal decomposition temperatures of several commercial and custom synthesized perfluoroalkylether fluids were determined with a computerized tensimeter. In general, the decomposition temperatures of the commercial fluids were all similar and significantly higher than those for custom synthesized fluids. Correlation of the decomposition temperatures with the molecular structures of the primary components of the commercial fluids revealed that the stability of the fluids is not affected by intrinsic factors such as carbon chain length, branching, or cumulated difluoroformal groups. Instead, correlation with extrinsic factors revealed that the stability may be limited by the presence of small quantities of thermally unstable material and/or chlorine-containing material arising from the use of chlorine-containing solvents during synthesis. Finally, correlation of decomposition temperatures with molecular weights for Demnum and Krytox fluids supports a chain cleavage reaction mechanism for Demnum fluids and an unzipping reaction mechanism for Krytox fluids.

  7. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    NASA Astrophysics Data System (ADS)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use, hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat," or decompose, organic matter, they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school student-led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  8. The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.

    PubMed

    Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E

    2018-05-01

    In landmark-based shape analysis, size is measured in most cases with Centroid Size. Changes in shape are decomposed into affine and non-affine components. The non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and the m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations, involving large changes in size. The cardiac example analyzed in the present paper shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a concise description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as a change in m-Volume rather than a change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. The new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively between healthy subjects and patients affected by Hypertrophic Cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

    A new group analysis method for summarizing task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined fMRI data to extract task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from among the resulting independent components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus allows grouping more appropriate multi-subject BOLD effects in the group analysis.

  10. Adaptive Decomposition of Highly Resolved Time Series into Local and Non‐local Components

    EPA Science Inventory

    Highly time-resolved air monitoring data are widely being collected over long time horizons in order to characterizeambient and near-source air quality trends. In many applications, it is desirable to split the time-resolved data into two ormore components (e.g., local and region...

  11. Analysis and visualization of single-trial event-related potentials

    NASA Technical Reports Server (NTRS)

    Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.

    2001-01-01

    In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.

  12. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    ERIC Educational Resources Information Center

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education, and many universities have made effective progress in evaluating it. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  13. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

    We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In that earlier work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
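
    A hedged sketch of the idea behind PCA demodulation is shown below: after removing the per-pixel mean (the background term), the first two principal components of a stack of phase-shifted interferograms behave like quadrature terms, so the wrapped phase follows from their arctangent, up to a sign and a global offset; the synthetic fringes and random phase steps are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    phase = 6 * (x**2 + y**2)                    # synthetic wavefront
    deltas = rng.uniform(0, 2 * np.pi, 8)        # unknown phase shifts
    frames = np.stack([np.cos(phase + d) for d in deltas])

    D = frames.reshape(len(deltas), -1)
    D = D - D.mean(axis=0)                       # remove the background term
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    pc1 = Vt[0].reshape(64, 64)                  # ~ cos(phase), up to scale/sign
    pc2 = Vt[1].reshape(64, 64)                  # ~ sin(phase), up to scale/sign
    wrapped = np.arctan2(pc2, pc1)               # demodulated (wrapped) phase
    ```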

  14. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…

  15. Burst and Principal Components Analyses of MEA Data for 16 Chemicals Describe at Least Three Effects Classes.

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug- and chemical-induced changes in neuronal network function and have been used for neurotoxicity screening. As a proof-of-concept, the current study assessed the utility of analytical "fingerprinting" using Principal Components Analysis (P...

  16. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  17. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images in order to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations in such a way that, when the input facial image is described by the selected IMF components, all these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance, with robustness to noise corruption, illumination variation, and facial expressions.

  18. Thermal decomposition of wood: influence of wood components and cellulose crystallite size.

    PubMed

    Poletto, Matheus; Zattera, Ademir J; Forte, Maria M C; Santana, Ruth M C

    2012-04-01

    The influence of wood components and cellulose crystallinity on the thermal degradation behavior of different wood species has been investigated using thermogravimetry, chemical analysis and X-ray diffraction. Four wood samples, Pinus elliottii (PIE), Eucalyptus grandis (EUG), Mezilaurus itauba (ITA) and Dipteryx odorata (DIP) were used in this study. The results showed that higher extractives contents associated with lower crystallinity and lower cellulose crystallite size can accelerate the degradation process and reduce the wood thermal stability. On the other hand, the thermal decomposition of wood shifted to higher temperatures with increasing wood cellulose crystallinity and crystallite size. These results indicated that the cellulose crystallite size affects the thermal degradation temperature of wood species. Copyright © 2012. Published by Elsevier Ltd.

  19. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.

  20. Regional income inequality model based on Theil index decomposition and weighted variance coefficient

    NASA Astrophysics Data System (ADS)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people's per capita income. Methods of measuring regional income inequality have been suggested by many experts. This research used the Theil index and the weighted variation coefficient to measure regional income inequality. Based on the Theil index, regional income inequality can be decomposed into workforce productivity and workforce participation components, presented in a linear relation. Using sector-level economic assumptions, sectoral income values, and workforce rates for each sector j, the workforce productivity imbalance can be decomposed into between-sector and intra-sector components, as sketched below. Next, a weighted variation coefficient is defined for the revenue and productivity of the workforce. From the square of the weighted variation coefficient, the decomposition of the regional revenue imbalance can be analyzed to determine how much each component contributes to the regional imbalance; in this research, nine sectors of economic activity were analyzed.
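
    The Theil decomposition logic described above can be made concrete with a short sketch: total inequality splits exactly into a within-sector term (income-share-weighted sectoral Theil indices) and a between-sector remainder; the incomes and the nine sector labels are stand-in data.

    ```python
    import numpy as np

    def theil(y):
        """Theil T index: (1/n) * sum((y/mu) * log(y/mu))."""
        share = y / y.sum()
        return np.sum(share * np.log(len(y) * share))

    rng = np.random.default_rng(0)
    income = rng.lognormal(mean=1.0, sigma=0.6, size=900)
    sector = rng.integers(0, 9, size=900)          # nine economic sectors

    T_total = theil(income)
    shares = np.array([income[sector == j].sum() for j in range(9)]) / income.sum()
    T_within = sum(shares[j] * theil(income[sector == j]) for j in range(9))
    T_between = T_total - T_within                 # inequality across sectors
    ```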

  1. A roadmap for bridging basic and applied research in forensic entomology.

    PubMed

    Tomberlin, J K; Mohr, R; Benbow, M E; Tarone, A M; VanLaerhoven, S

    2011-01-01

    The National Research Council issued a report in 2009 that heavily criticized the forensic sciences. The report made several recommendations that if addressed would allow the forensic sciences to develop a stronger scientific foundation. We suggest a roadmap for decomposition ecology and forensic entomology hinging on a framework built on basic research concepts in ecology, evolution, and genetics. Unifying both basic and applied research fields under a common umbrella of terminology and structure would facilitate communication in the field and the production of scientific results. It would also help to identify novel research areas leading to a better understanding of principal underpinnings governing ecosystem structure, function, and evolution while increasing the accuracy of and ability to interpret entomological evidence collected from crime scenes. By following the proposed roadmap, a bridge can be built between basic and applied decomposition ecology research, culminating in science that could withstand the rigors of emerging legal and cultural expectations.

  2. Water dissociation in a radio-frequency electromagnetic field with ex situ electrodes—decomposition of perfluorooctanoic acid and tetrahydrofuran

    NASA Astrophysics Data System (ADS)

    Schneider, Jens; Holzer, Frank; Kraus, Markus; Kopinke, Frank-Dieter; Roland, Ulf

    2016-10-01

    The application of radio waves with a frequency of 13.56 MHz on electrolyte solutions in a capillary reactor led to the formation of reactive hydrogen and oxygen species and finally to molecular oxygen and hydrogen. This process of water splitting can be principally used for the elimination of hazardous chemicals in water. Two compounds, namely perfluorooctanoic acid (PFOA) and tetrahydrofuran, were converted using this process. Their main decomposition products were highly volatile and therefore transferred to a gas phase, where they could be identified by GC-MS analyses. It is remarkable that the chemical reactions could benefit from both the oxidizing and reducing species formed in the plasma process, which takes place in gas bubbles saturated with water vapor. The breaking of C-C and C-F bonds was proven in the case of PFOA, probably initiated by electron impacts and radical reactions.

  3. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
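
    As background, a single PPCA unit of the kind fitted at each neuron has a closed-form maximum-likelihood solution (Tipping and Bishop); the sketch below fits one unit to a placeholder cluster, omitting the competitive assignment of samples to neurons.

        # Sketch: maximum-likelihood probabilistic PCA for one cluster (Tipping & Bishop).
        # The competitive assignment of samples to neurons is not shown.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))          # placeholder cluster data, n x d
        q = 3                                  # latent dimensionality for this neuron

        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)     # ascending eigenvalues
        evals, evecs = evals[::-1], evecs[:, ::-1]

        sigma2 = evals[q:].mean()              # ML noise variance: mean of discarded eigenvalues
        W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))  # ML loadings

        # Posterior latent mean for a sample x: E[z|x] = M^-1 W^T (x - mu), M = W^T W + sigma2 I.
        M = W.T @ W + sigma2 * np.eye(q)
        z = np.linalg.solve(M, W.T @ Xc[0])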

  4. A principal components model of soundscape perception.

    PubMed

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose of developing such a model, a listening experiment was conducted. One hundred listeners rated 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: pleasantness, eventfulness, and familiarity, explaining 50, 18, and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
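
    The dimension reduction reported here is standard PCA of an excerpt-by-attribute matrix; a minimal sketch with placeholder data (random values in place of the averaged ratings):

        # Sketch: PCA of averaged attribute-scale ratings (placeholder data, not the study's).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        ratings = rng.normal(size=(50, 116))   # 50 soundscape excerpts x 116 attribute scales

        pca = PCA(n_components=3)
        scores = pca.fit_transform(ratings)    # component scores per excerpt
        print(pca.explained_variance_ratio_)   # in the study: roughly 0.50, 0.18, 0.06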

  5. Plant, fungal, bacterial, and nitrogen interactions in the litter layer of a native Patagonian forest.

    PubMed

    Vivanco, Lucía; Rascovan, Nicolás; Austin, Amy T

    2018-01-01

    Plant-microbial interactions in the litter layer represent one of the most relevant interactions for biogeochemical cycling as litter decomposition is a key first step in carbon and nitrogen turnover. However, our understanding of these interactions in the litter layer remains elusive. In an old-growth mixed Nothofagus forest in Patagonia, we studied the effects of single tree species identity and the mixture of three tree species on the fungal and bacterial composition in the litter layer. We also evaluated the effects of nitrogen (N) addition on these plant-microbial interactions. In addition, we compared the magnitude of stimulation of litter decomposition due to home field advantage (HFA, decomposition occurs more rapidly when litter is placed beneath the plant species from which it had been derived than beneath a different plant species) and N addition that we previously demonstrated in this same forest, and used microbial information to interpret these results. Tree species identity had a strong and significant effect on the composition of fungal communities but not on the bacterial community of the litter layer. The microbial composition of the litter layer under the tree species mixture showed an averaged contribution of each single tree species. N addition did not erase the plant species footprint on the fungal community, nor did it alter the bacterial community. N addition stimulated litter decomposition as much as HFA for certain tree species, but the mechanisms behind N and HFA stimulation may have differed. Our results suggest that stimulation of decomposition from N addition might have occurred due to increased microbial activity without large changes in microbial community composition, while HFA may have resulted principally from plant species' effects on the litter fungal community. Together, our results suggest that plant-microbial interactions can be an overlooked driver of litter decomposition in temperate forests.

  6. Litter Decomposition in a Semiarid Dune Grassland: Neutral Effect of Water Supply and Inhibitory Effect of Nitrogen Addition

    PubMed Central

    Li, Yulin; Ning, Zhiying; Cui, Duo; Mao, Wei; Bi, Jingdong; Zhao, Xueyong

    2016-01-01

    Background: The decomposition of plant material in arid ecosystems is considered to be substantially controlled by water and N availability. The responses of litter decomposition to external N and water, however, remain controversial, and the interactive effects of supplementary N and water have also been largely unexamined. Methodology/Principal Findings: A 3.5-year field experiment with supplementary nitrogen and water was conducted to assess the effects of N and water addition on mass loss and nitrogen release in leaves and fine roots of three dominant plant species (i.e., Artemisia halondendron, Setaria viridis, and Phragmites australis) with contrasting substrate chemistry (e.g. N concentration, lignin content in this study) in a desertified dune grassland of Inner Mongolia, China. The treatments included N addition, water addition, combination of N and water, and an untreated control. The decomposition rate in both leaves and roots was related to the initial litter N and lignin concentrations of the three species. However, litter quality did not explain the slower mass loss in roots than in leaves in the present study, and this warrants further research. Nitrogen addition, either alone or in combination with water, significantly inhibited dry mass loss and N release in the leaves and roots of the three species, whereas water input had little effect on the decomposition of leaf litter and fine roots, suggesting that there was no interactive effect of supplementary N and water on litter decomposition in this system. Furthermore, our results clearly indicate that the inhibitory effects of external N on dry mass loss and nitrogen release are relatively strong in high-lignin litter compared with low-lignin litter. Conclusion/Significance: These findings suggest that increasing precipitation does little to facilitate ecosystem carbon turnover, whereas atmospheric N deposition can enhance carbon sequestration and nitrogen retention in desertified dune grasslands of northern China. Additionally, the litter quality of plant species should be considered when modelling the carbon cycle and nutrient dynamics of this system. PMID:27617439

  7. Controls of Carbon Preservation in Coastal Wetlands of Texas: Mangrove vs. Saltmarsh Ecosystems

    NASA Astrophysics Data System (ADS)

    Sterne, A. M. E.; Louchouarn, P.; Norwood, M. J.; Kaiser, K.

    2014-12-01

    The estimated magnitude of the carbon (C) stocks contained in the first meter of US coastal wetland soils represents ~10% of the entire C stock in US soils (4 vs. 52 Pg, respectively). Because this stock extends to several meters below the surface for many coastal wetlands, it becomes paramount to understand the fate of C under ecosystem shifts, varying natural environmental constraints, and changing land use. In this project we analyze total hydrolysable carbohydrates, amino acids, phenols and stable isotopic data (δ13C) at two study sites located on the Texas coastline to investigate chemical compositions and the stage of decomposition in mangrove and marsh grass dominated wetlands. Carbohydrates are used as specific decomposition indicators of the polysaccharide component of wetland plants, whereas amino acids are used to identify the contribution of microbial biomass, and acid/aldehyde ratios of syringyl (S) and vanillyl (V) phenols ((Ac/Al)S,V) follow the decomposition of lignin. Preliminary results show carbohydrates account for 30-50% of organic carbon in plant litter and surface sediments at both sites. Sharp declines of carbohydrate yields with depth occur in parallel with increasing (Ac/Al)S,V ratios, indicating substantial decomposition of both the polysaccharide and lignin components of litter detritus. Ecological differences (between marsh grass and mangrove dominated wetlands) are discussed to better constrain the role of litter biochemistry and ecological shifts on C preservation in these anoxic environments.

  8. Prediction of the Maximum Temperature for Life Based on the Stability of Metabolites to Decomposition in Water

    PubMed Central

    Bains, William; Xiao, Yao; Yu, Changyong

    2015-01-01

    The components of life must survive in a cell long enough to perform their function in that cell. Because the rate of attack by water increases with temperature, we can, in principle, predict a maximum temperature above which an active terrestrial metabolism cannot function by analysis of the decomposition rates of the components of life, and comparison of those rates with the metabolites' minimum metabolic half-lives. The present study is a first step in this direction, providing an analytical framework and method, and analyzing the stability of 63 small molecule metabolites based on literature data. Assuming that attack by water follows a first order rate equation, we extracted decomposition rate constants from literature data and estimated their statistical reliability. The resulting rate equations were then used to give a measure of confidence in the half-life of the metabolite concerned at different temperatures. There is little reliable data on metabolite decomposition or hydrolysis rates in the literature; the data are mostly confined to a small number of classes of chemicals, and the available data are sometimes mutually contradictory because of varying reaction conditions. However, a preliminary analysis suggests that terrestrial biochemistry is limited to environments below ~150–180 °C. We comment briefly on why pressure is likely to have a small effect on this. PMID:25821932
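
    The underlying calculation, assuming first-order attack by water, is a linear fit of ln C(t) = ln C0 - kt followed by the half-life t1/2 = ln 2 / k; a sketch with invented concentration data:

        # Sketch: first-order decomposition rate constant and half-life from time-series data.
        # Concentrations below are invented, not from the study's literature survey.
        import numpy as np

        t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # hours
        conc = np.array([1.00, 0.81, 0.66, 0.44, 0.19])  # relative metabolite concentration

        # ln C = ln C0 - k t; the slope of a linear fit gives -k.
        slope, intercept = np.polyfit(t, np.log(conc), 1)
        k = -slope
        half_life = np.log(2) / k
        print(f"k = {k:.3f} / h, half-life = {half_life:.2f} h")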

  9. Component-specific modeling. [jet engine hot section components

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.

    1992-01-01

    Accomplishments are described for a 3-year program to develop methodology for component-specific modeling of aircraft hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models, (2) geometry model generators, (3) remeshing, (4) specialty three-dimensional inelastic structural analysis, (5) computationally efficient solvers, (6) adaptive solution strategies, (7) engine performance parameters/component response variables decomposition and synthesis, (8) integrated software architecture and development, and (9) validation cases for software developed.

  10. Fire affects root decomposition, soil food web structure, and carbon flow in tallgrass prairie

    NASA Astrophysics Data System (ADS)

    Shaw, E. Ashley; Denef, Karolien; Milano de Tomasel, Cecilia; Cotrufo, M. Francesca; Wall, Diana H.

    2016-05-01

    Root litter decomposition is a major component of carbon (C) cycling in grasslands, where it provides energy and nutrients for soil microbes and fauna. This is especially important in grasslands where fire is common and removes aboveground litter accumulation. In this study, we investigated whether fire affects root decomposition and C flow through the belowground food web. In a greenhouse experiment, we applied 13C-enriched big bluestem (Andropogon gerardii) root litter to intact tallgrass prairie soil cores collected from annually burned (AB) and infrequently burned (IB) treatments at the Konza Prairie Long Term Ecological Research (LTER) site. Incorporation of 13C into microbial phospholipid fatty acids and nematode trophic groups was measured on six occasions during a 180-day decomposition study to determine how C was translocated through the soil food web. Results showed significantly different soil communities between treatments and higher microbial abundance for IB. Root decomposition occurred rapidly and was significantly greater for AB. Microbes and their nematode consumers immediately assimilated root litter C in both treatments. Root litter C was preferentially incorporated in a few groups of microbes and nematodes, but depended on burn treatment: fungi, Gram-negative bacteria, Gram-positive bacteria, and fungivore nematodes for AB and only omnivore nematodes for IB. The overall microbial pool of root-litter-derived C significantly increased over time but was not significantly different between burn treatments. The nematode pool of root-litter-derived C also significantly increased over time, and was significantly higher for the AB treatment at 35 and 90 days after litter addition. In conclusion, the C flow from root litter to microbes to nematodes is not only measurable but also significant, indicating that higher nematode trophic levels are critical components of C flow during root decomposition, which, in turn, is significantly affected by fire. Not only does fire affect the soil community and root decomposition, but the lower microbial abundance, greater root turnover, and the increased incorporation of root litter C by microbes and nematodes for AB suggest that annual burning increases root-litter-derived C flow through the soil food web of the tallgrass prairie.

  11. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.

  12. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
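
    As a reference point for the ALS baseline discussed above, a minimal numpy sketch of alternating least squares for a rank-R CP model of a third-order tensor follows (random data, fixed iteration count, no convergence check or line search):

        # Sketch: alternating least squares (ALS) for a rank-R CP decomposition of a
        # third-order tensor; random data and a fixed iteration count, for illustration only.
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(10, 12, 14))
        R = 3
        A, B, C = (rng.normal(size=(n, R)) for n in X.shape)

        for _ in range(100):
            # Each update solves a linear least-squares problem for one factor matrix.
            A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

        X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)   # sum of R rank-one tensors
        fit = 1 - np.linalg.norm(X - X_hat) / np.linalg.norm(X)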

  13. Application of principal component analysis in protein unfolding: an all-atom molecular dynamics simulation study.

    PubMed

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-28

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide: ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, the principal component analysis method was applied to give new insight into protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from the MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.

  14. Application of principal component analysis in protein unfolding: An all-atom molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-01

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide: ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, the principal component analysis method was applied to give new insight into protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from the MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.
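
    In both records above, the analysis diagonalizes the covariance of coordinate fluctuations over the trajectory; a generic sketch follows, with random coordinates standing in for an MD trajectory and the rigid-body alignment that real trajectories require omitted.

        # Sketch: PCA of an MD trajectory's coordinate fluctuations (random stand-in data;
        # rotational/translational alignment of frames is omitted).
        import numpy as np

        rng = np.random.default_rng(3)
        n_frames, n_atoms = 1000, 76
        traj = rng.normal(size=(n_frames, 3 * n_atoms))   # flattened x,y,z per frame

        fluct = traj - traj.mean(axis=0)                  # remove the average structure
        cov = fluct.T @ fluct / (n_frames - 1)            # covariance of fluctuations
        evals, evecs = np.linalg.eigh(cov)
        order = np.argsort(evals)[::-1]
        evals, evecs = evals[order], evecs[:, order]

        pc1 = fluct @ evecs[:, 0]      # projection on the first principal component
        share = evals / evals.sum()    # fraction of collective motion captured by each PC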

  15. SAS program for quantitative stratigraphic correlation by principal components

    USGS Publications Warehouse

    Hohn, M.E.

    1985-01-01

    A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes the data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from the scores of events on the first principal component. An option for several types of diagnostic plots is provided; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allows one to evaluate the goodness of fit between the composite and the measured data. The program is extended easily to the creation of a rank-order composite. © 1985.
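
    Outside SAS, the core computation amounts to standardizing each section, extracting the first principal component of the event-by-section matrix, and ordering events by their scores; a hedged numpy sketch follows (invented range-limit data; the program's missing-value estimation and diagnostic plots are omitted).

        # Sketch: composite stratigraphic section from first-PC scores (invented data;
        # the SAS program's missing-value estimation and diagnostics are not reproduced).
        import numpy as np

        rng = np.random.default_rng(4)
        events = rng.normal(size=(20, 5))     # 20 taxa range limits x 5 sections

        # Standardize within each section (column z-scores).
        Z = (events - events.mean(axis=0)) / events.std(axis=0, ddof=1)

        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        pc1_scores = Z @ Vt[0]                # event scores on the first PC

        composite_order = np.argsort(pc1_scores)   # composite ordering of events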

  16. Implementation of an integrating sphere for the enhancement of noninvasive glucose detection using quantum cascade laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.

    2018-05-01

    An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces form factor, creating a mobile sensor. Principal component analysis of spectra taken from human subjects has yielded principal components that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
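
    Using component scores as regressors is principal component regression; a generic scikit-learn sketch follows, with random spectra standing in for the human-subject measurements.

        # Sketch: principal component regression for glucose prediction (random
        # placeholder spectra; not the group's calibration data or code).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        spectra = rng.normal(size=(200, 400))      # 200 scans x 400 wavenumber bins
        glucose = rng.uniform(50, 300, size=200)   # reference concentrations (mg/dL)

        pcr = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
        pcr.fit(spectra, glucose)
        predicted = pcr.predict(spectra)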

  17. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    PubMed

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
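
    Ordinary sparse PCA (without the paper's predictive spatial criterion) is available off the shelf and illustrates the kind of sparse loading vectors involved; a sketch with placeholder data:

        # Sketch: ordinary sparse PCA on a pollutant matrix (random placeholder data).
        # The paper's predictive criterion, which ties loadings to spatial predictability,
        # is not reproduced here.
        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 25))            # monitors x pollutant species

        spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
        scores = spca.fit_transform(X)
        loadings = spca.components_               # sparse loading vectors, many exact zeros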

  18. Principals' Perceptions of Collegial Support as a Component of Administrative Inservice.

    ERIC Educational Resources Information Center

    Daresh, John C.

    To address the problem of increasing professional isolation of building administrators, the Principals' Inservice Project helps establish principals' collegial support groups across the nation. The groups are typically composed of 6 to 10 principals who meet at least once each month over a 2-year period. One collegial support group of seven…

  19. Training the Trainers: Learning to Be a Principal Supervisor

    ERIC Educational Resources Information Center

    Saltzman, Amy

    2017-01-01

    While most principal supervisors are former principals themselves, few come to the role with specific training in how to do the job effectively. For this reason, both the Washington, D.C., and Tulsa, Oklahoma, principal supervisor programs include a strong professional development component. In this article, the author takes a look inside these…

  20. Use of Geochemistry Data Collected by the Mars Exploration Rover Spirit in Gusev Crater to Teach Geomorphic Zonation through Principal Components Analysis

    ERIC Educational Resources Information Center

    Rodrigue, Christine M.

    2011-01-01

    This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…

  1. A Principal Components Analysis and Validation of the Coping with the College Environment Scale (CWCES)

    ERIC Educational Resources Information Center

    Ackermann, Margot Elise; Morrow, Jennifer Ann

    2008-01-01

    The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…

  2. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    We compare different mother wavelets used for de-noising model and experimental data consisting of absorption spectra profiles of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.
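
    A typical wavelet de-noising pass of the kind compared in this work decomposes the profile, soft-thresholds the detail coefficients, and reconstructs; the sketch below uses PyWavelets, with the db4 mother wavelet and the universal threshold as illustrative assumptions.

        # Sketch: wavelet de-noising of an absorption-spectrum profile with PyWavelets.
        # The 'db4' mother wavelet and universal threshold are assumptions for illustration.
        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        x = np.linspace(0, 1, 1024)
        profile = np.exp(-((x - 0.5) ** 2) / 0.005) + 0.05 * rng.normal(size=x.size)

        coeffs = pywt.wavedec(profile, 'db4', level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate, finest level
        thresh = sigma * np.sqrt(2 * np.log(profile.size))   # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, 'db4')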

  3. Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.

    2013-06-01

    Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450-950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.

  4. Stability of Nonlinear Principal Components Analysis: An Empirical Study Using the Balanced Bootstrap

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…

  5. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  6. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  7. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  8. 40 CFR 60.1580 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule...

  9. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  10. Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach

    ERIC Educational Resources Information Center

    Mukorera, Sophia; Nyatanga, Phocenah

    2017-01-01

    Students' attendance and engagement with teaching and learning practices are perceived as a critical element for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…

  11. Principal Perspectives about Policy Components and Practices for Reducing Cyberbullying in Urban Schools

    ERIC Educational Resources Information Center

    Hunley-Jenkins, Keisha Janine

    2012-01-01

    This qualitative study explores the perspectives of principals of large, urban, Midwestern schools about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…

  12. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  13. Learning Principal Component Analysis by Using Data from Air Quality Networks

    ERIC Educational Resources Information Center

    Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia

    2017-01-01

    With the final objective of using computational and chemometric tools in chemistry studies, this paper shows the methodology and interpretation of principal component analysis (PCA) using pollution data from different cities. This paper describes how students can obtain data on air quality and process such data for additional information…

  14. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  15. Relationships between Association of Research Libraries (ARL) Statistics and Bibliometric Indicators: A Principal Components Analysis

    ERIC Educational Resources Information Center

    Hendrix, Dean

    2010-01-01

    This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…

  16. Principal component analysis for protein folding dynamics.

    PubMed

    Maisuradze, Gia G; Liwo, Adam; Scheraga, Harold A

    2009-01-09

    Protein folding is considered here by studying the dynamics of the folding of the triple beta-strand WW domain from the Formin-binding protein 28. Starting from the unfolded state and ending either in the native or nonnative conformational states, trajectories are generated with the coarse-grained united residue (UNRES) force field. The effectiveness of principal components analysis (PCA), an already established mathematical technique for finding global, correlated motions in atomic simulations of proteins, is evaluated here for coarse-grained trajectories. The problems related to PCA and their solutions are discussed. The folding and nonfolding of proteins are examined with free-energy landscapes. Detailed analyses of many folding and nonfolding trajectories at different temperatures show that PCA is very efficient for characterizing the general folding and nonfolding features of proteins. It is shown that the first principal component captures and describes in detail the dynamics of a system. Anomalous diffusion in the folding/nonfolding dynamics is examined by the mean-square displacement (MSD) and the fractional diffusion and fractional kinetic equations. The collisionless (or ballistic) behavior of a polypeptide undergoing Brownian motion along the first few principal components is accounted for.
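
    The anomalous-diffusion diagnostic mentioned above is the mean-square displacement along a principal component, whose log-log slope separates subdiffusive, normal, and ballistic regimes; a generic sketch follows (a random walk stands in for the PC1 time series).

        # Sketch: mean-square displacement (MSD) along the first principal component and
        # its log-log slope; pc1 is a placeholder random walk, not a UNRES trajectory.
        import numpy as np

        rng = np.random.default_rng(8)
        pc1 = np.cumsum(rng.normal(size=5000))    # placeholder PC1 time series

        lags = np.arange(1, 200)
        msd = np.array([np.mean((pc1[lag:] - pc1[:-lag]) ** 2) for lag in lags])

        # MSD ~ t^alpha: alpha ~ 1 is normal diffusion, alpha ~ 2 is ballistic,
        # and alpha < 1 indicates subdiffusion.
        alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)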

  17. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters.

    PubMed

    Tao, Dapeng; Lin, Xu; Jin, Lianwen; Li, Xuelong

    2016-03-01

    Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing in detail the basic strokes of Chinese characters, we propose that font recognition on a single Chinese character is a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with the 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component layer convolution operation helps remove the noise and obtain rational and complete font information and 2) simultaneously, 2DLSTM deals with the long-range contextual processing along scan directions that can contribute to capturing the contrast between character trajectory and background. Experiments using the frequently used CCFR dataset suggest the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.

  18. Dynamic of consumer groups and response of commodity markets by principal component analysis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011, but decline following 2012. A sharp drop in an eigenvalue implies a significant change of the market state. Three commodity sectors, energy, metals, and agriculture, are projected into a two-dimensional space consisting of two principal components (PC). We observe that they form three distinct clusters in relation to the various sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in the two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of states of one group can be used as a warning for the other groups.
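
    The market-state indicator described here, the largest eigenvalue of the return cross-correlation matrix in a moving window, is straightforward to compute; a sketch with random returns in place of commodity futures data:

        # Sketch: largest eigenvalue of the cross-correlation matrix of daily returns in a
        # moving window (random returns stand in for commodity futures data).
        import numpy as np

        rng = np.random.default_rng(9)
        returns = rng.normal(size=(2000, 24))     # days x commodities
        window = 126                              # roughly six months of trading days

        top_eig = []
        for start in range(0, returns.shape[0] - window, 21):   # step of about one month
            corr = np.corrcoef(returns[start:start + window].T)
            top_eig.append(np.linalg.eigvalsh(corr)[-1])        # largest eigenvalue

        # A sharp drop in top_eig between windows signals a change of market state.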

  19. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and their elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, whereas V could not be detected. In addition, Na, K, and Ca showed high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P, and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. These results will provide a good basis for the comprehensive utilization of N. roborowskii. Copyright © by the Chinese Pharmaceutical Association.

  20. [Applications of three-dimensional fluorescence spectrum of dissolved organic matter to identification of red tide algae].

    PubMed

    Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao

    2011-01-01

    The identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from the cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algae species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates were more than 90% for Phaeodactylum and Chaetoceros. The results showed that the identification techniques for the 10 species of red tide algae, based on the three-dimensional fluorescence spectra of FDOM from cultured red tide algae and principal component analysis, work well.
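
    The pipeline amounts to projecting each fluorescence spectrum onto principal components and classifying the scores with a discriminant rule; the sketch below uses scikit-learn's linear discriminant analysis as a stand-in for the paper's Bayesian discriminant analysis, with random data in place of the three-dimensional fluorescence spectra.

        # Sketch: PCA features plus a discriminant classifier for algal identification.
        # Random data stand in for the fluorescence spectra, and LinearDiscriminantAnalysis
        # is a stand-in for the paper's Bayesian discriminant analysis.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(10)
        spectra = rng.normal(size=(300, 500))     # flattened excitation-emission spectra
        species = rng.integers(0, 10, size=300)   # labels for the 10 algae species

        clf = make_pipeline(PCA(n_components=8), LinearDiscriminantAnalysis())
        accuracy = cross_val_score(clf, spectra, species, cv=5).mean()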
