Science.gov

Sample records for component analysis based

  1. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
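
    As a rough illustration of the deflation-mode, kurtosis-based separation described above, the sketch below runs scikit-learn's FastICA on three invented mock "frequency maps"; the synthetic sources and mixing matrix are hypothetical stand-ins, not the paper's data:

      import numpy as np
      from sklearn.decomposition import FastICA

      # Hypothetical stand-ins for the 100/143/217 GHz mock maps, flattened to 1-D.
      rng = np.random.default_rng(0)
      n_pix = 10_000
      cmb = rng.normal(size=n_pix)                         # near-Gaussian
      dust = rng.lognormal(sigma=0.5, size=n_pix)          # mildly non-Gaussian
      co = np.where(rng.random(n_pix) < 0.05,
                    rng.exponential(5.0, n_pix), 0.0)      # strongly non-Gaussian
      mixing = np.array([[1.0, 0.8, 0.3],                  # rows = channels (assumed values)
                         [1.0, 0.2, 0.0],
                         [1.0, 1.5, 1.0]])
      maps = mixing @ np.vstack([cmb, dust, co])           # three observed "sky maps"

      # Deflation-mode FastICA with the 'cube' contrast, a fourth-moment
      # (kurtosis-like) measure of non-Gaussianity.
      ica = FastICA(n_components=3, algorithm="deflation", fun="cube", random_state=0)
      sources = ica.fit_transform(maps.T)                  # columns = independent components

      # The CO-like source should carry the largest excess kurtosis.
      centered = sources - sources.mean(axis=0)
      kurtosis = (centered**4).mean(axis=0) / centered.var(axis=0)**2 - 3.0
      print("excess kurtosis per component:", kurtosis)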

  2. Quantitative interpretation of mineral hyperspectral images based on principal component analysis and independent component analysis methods.

    PubMed

    Jiang, Xiping; Jiang, Yu; Wu, Fang; Wu, Fenghuang

    2014-01-01

    Interpretation of mineral hyperspectral images provides large amounts of high-dimensional data, which is often complicated by mixed pixels. The quantitative interpretation of hyperspectral images is known to be extremely difficult when three types of information are unknown, namely, the number of pure pixels, the spectra of pure pixels, and the mixing matrix. The problem is made even more complex by the disturbance of noise. The key to interpreting abstract mineral component information, i.e., pixel unmixing and abundance inversion, is how to effectively reduce noise, dimension, and redundancy. A three-step procedure is developed in this study for quantitative interpretation of hyperspectral images. First, the principal component analysis (PCA) method is used to process the pixel spectrum matrix and keep the characteristic vectors with larger eigenvalues. This effectively reduces noise and redundancy, which facilitates the extraction of major component information. Second, the independent component analysis (ICA) method is used to identify and unmix the pixels based on the linear mixing model. Third, the pure-pixel spectra are normalized for abundance inversion, which gives the abundance of each pure pixel. In numerical experiments, both simulated and actual data were used to demonstrate the performance of our three-step procedure. With simulated data, the results of our procedure were compared with theoretical values. With actual data measured from core hyperspectral images, the results obtained with our algorithm were compared with those of similar software (Mineral Spectral Analysis 1.0, Nanjing Institute of Geology and Mineral Resources). The comparisons show that our method is effective and can provide a reference for quantitative interpretation of hyperspectral images.
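
    A schematic of the three-step procedure, assuming scikit-learn; the shift-and-normalize in step 3 is a simplistic stand-in for the paper's abundance inversion, and the number of endmembers `n_end` is assumed known:

      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      def unmix(pixels, n_end):
          """Three-step unmixing sketch; `pixels` is (n_pixels, n_bands)."""
          # Step 1: PCA keeps directions with large eigenvalues,
          # reducing noise and redundancy before unmixing.
          scores = PCA(n_components=n_end).fit_transform(pixels)
          # Step 2: ICA separates the retained components under the
          # linear mixing model.
          abund = FastICA(n_components=n_end, random_state=0).fit_transform(scores)
          # Step 3: crude normalization so per-pixel abundances are
          # non-negative and sum to one.
          abund -= abund.min(axis=0)
          abund /= abund.sum(axis=1, keepdims=True) + 1e-12
          return abund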

  3. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed method, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to reconstruction without the noise-removing filter.
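
    A minimal sketch of the ICA-domain shrinkage idea, assuming scikit-learn and treating the image as a stack of patches (the patch extraction and the penumbral reconstruction step itself are not shown):

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_soft_threshold(patches, n_components, thresh):
          """`patches` is (n_patches, patch_size); returns denoised patches."""
          ica = FastICA(n_components=n_components, random_state=0)
          coeffs = ica.fit_transform(patches)         # transform to ICA domain
          # Soft thresholding (shrinkage) of the noisy coefficients.
          shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
          return ica.inverse_transform(shrunk)        # back to the image domain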

  4. Multichannel ECG data compression based on multiscale principal component analysis.

    PubMed

    Sharma, L N; Dandapat, S; Mahanta, Anil

    2012-07-01

    In this paper, multiscale principal component analysis (MSPCA) is proposed for multichannel electrocardiogram (MECG) data compression. In the wavelet domain, principal component analysis (PCA) of the multiscale multivariate matrices of multichannel signals helps reduce dimension and remove redundant information present in the signals. The selection of principal components (PCs) is based on the average fractional energy contribution of the eigenvalues of a data matrix. Multichannel compression is implemented using a uniform quantizer and entropy coding of the PCA coefficients. The compressed signal quality is evaluated quantitatively using percentage root mean square difference (PRD) and wavelet energy-based diagnostic distortion (WEDD) measures. Using a dataset from the CSE multilead measurement library, a multichannel compression ratio of 5.98:1 is achieved with a PRD value of 2.09% and a lowest WEDD value of 4.19%. Based on the gold-standard subjective quality measure, the lowest mean opinion score error value found is 5.56%.
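
    A sketch of the multiscale-PCA idea, assuming PyWavelets and scikit-learn; the uniform quantizer and entropy coder of the paper are omitted, and the energy threshold below is an assumed parameter:

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      def mspca_compress(ecg, wavelet="db4", level=4, energy_keep=0.99):
          """`ecg` is (n_samples, n_channels); returns per-subband PCA triples."""
          out = []
          for band in pywt.wavedec(ecg, wavelet, level=level, axis=0):
              pca = PCA().fit(band)
              # Keep PCs until their cumulative fractional energy
              # (eigenvalue share) reaches the target.
              k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_),
                                      energy_keep)) + 1
              out.append((pca.transform(band)[:, :k], pca.components_[:k], pca.mean_))
          return out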

  5. Background subtraction approach based on independent component analysis.

    PubMed

    Jiménez-Hernández, Hugo

    2010-01-01

    In this work, a new approach to background subtraction based on independent component analysis is presented. This approach assumes that background and foreground information are mixed in a given sequence of images. Foreground and background components can then be identified if their probability density functions are separable in the mixed space. Component estimation consists of calculating an unmixing matrix, which is computed with a fast ICA algorithm formulated as a Newton-Raphson maximization. The motion components are represented by the mid-significant eigenvalues of the unmixing matrix. The results show that the approach detects motion efficiently in outdoor and indoor scenarios and is robust to changes in scene luminance.

  6. Phase-shifting interferometry based on principal component analysis.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-04-15

    An asynchronous phase-shifting method based on principal component analysis (PCA) is presented. No restrictions on the background, modulation, or phase shifts are necessary. The method is very fast and has very low computational requirements, so it can be used with very large images and/or very large image sets. It is based on obtaining two quadrature signals by the PCA algorithm. We have applied the proposed method to simulated and experimental interferograms, obtaining satisfactory results.
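
    The core of the PCA phase-shifting idea fits in a few lines; a sketch assuming NumPy, with the usual sign/offset ambiguity of the method left unresolved:

      import numpy as np

      def pca_phase(frames):
          """`frames` is (n_frames, ny, nx): >= 3 randomly phase-shifted fringe images."""
          n, ny, nx = frames.shape
          X = frames.reshape(n, -1)
          X = X - X.mean(axis=0)                 # remove the background term
          # The two leading principal components are the quadrature fringe signals.
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          return np.arctan2(Vt[1], Vt[0]).reshape(ny, nx)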

  7. Maximum flow-based resilience analysis: From component to system

    PubMed Central

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable economic and societal loss. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of a parallel system increases with the number of components, while the resilience of a series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and a resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant component capacity improves system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
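
    To make the maximum-flow view concrete, here is a toy sketch with NetworkX; the network, capacities, and disruption are invented, and Zobel's time-dependent resilience measure itself is not computed:

      import networkx as nx

      G = nx.DiGraph()                      # toy road network: capacities = component performances
      G.add_edge("s", "a", capacity=3.0)
      G.add_edge("s", "b", capacity=2.0)
      G.add_edge("a", "t", capacity=2.0)
      G.add_edge("b", "t", capacity=3.0)
      nominal, _ = nx.maximum_flow(G, "s", "t")

      G["s"]["a"]["capacity"] = 1.0         # disrupt one component
      degraded, _ = nx.maximum_flow(G, "s", "t")
      print(f"system performance retained: {degraded / nominal:.0%}")   # 75%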

  8. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  9. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    PubMed

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2016-12-22

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
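
    A minimal sketch of the serial PCA decomposition, assuming scikit-learn; the monitoring statistics and control limits built on these features are omitted, and the kernel width is an arbitrary assumption:

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      def fit_spca(X, n_linear, n_kernel):
          """Linear PCA first, then kernel PCA on the residual subspace (RS)."""
          pca = PCA(n_components=n_linear).fit(X)
          linear_scores = pca.transform(X)                      # linear features
          residual = X - pca.inverse_transform(linear_scores)   # RS part of the data
          kpca = KernelPCA(n_components=n_kernel, kernel="rbf", gamma=1e-3).fit(residual)
          return pca, kpca, linear_scores, kpca.transform(residual)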

  10. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases show that the proposed method achieves competitive performance in both recognition accuracy and computational efficiency.

  11. Robust principal component analysis based on maximum correntropy criterion.

    PubMed

    He, Ran; Hu, Bao-Gang; Zheng, Wei-Shi; Kong, Xiang-Wei

    2011-06-01

    Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC, which is more theoretically solid than a heuristic rule based on MSE; 2) it requires no zero-mean assumption on the data and can estimate the data mean during optimization; and 3) its optimal solution consists of the principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur.

  12. CNN-Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of this paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning: the dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3, and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.

  13. Principal component analysis based methodology to distinguish protein SERS spectra

    NASA Astrophysics Data System (ADS)

    Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.

    2011-05-01

    Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility in the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin, and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. The combined use of SERS measurements and PCA is thus shown to be effective in categorizing proteins on the basis of secondary structure.

  14. Describing Crouzon and Pfeiffer syndrome based on principal component analysis.

    PubMed

    Staal, Femke C R; Ponniah, Allan J T; Angullia, Freida; Ruff, Clifford; Koudstaal, Maarten J; Dunaway, David

    2015-05-01

    Crouzon and Pfeiffer syndrome are syndromic craniosynostoses caused by specific mutations in the FGFR genes. Patients share the characteristics of a tall, flattened forehead, exorbitism, hypertelorism, maxillary hypoplasia and mandibular prognathism. Geometric morphometrics allows the identification of global shape changes within and between the normal and syndromic populations. Data from 27 Crouzon-Pfeiffer and 33 normal subjects were landmarked in order to compare both populations. With principal component analysis (PCA) the variation within both groups was visualized and the vector of change was calculated. This model normalized a Crouzon-Pfeiffer skull, which was compared to age-matched normative control data. PCA defined a vector that described the shape changes between both populations. Movies showed how the normal skull transformed into a Crouzon-Pfeiffer phenotype and vice versa. Comparing these results to established age-matched normal control data confirmed that our model could normalize a Crouzon-Pfeiffer skull. PCA was able to describe deformities associated with Crouzon-Pfeiffer syndrome and is a promising method to analyse variability in syndromic craniosynostosis. The virtual normalization of a Crouzon-Pfeiffer skull is useful to delineate the phenotypic changes required for correction, can help surgeons plan reconstructive surgery and is a potentially promising surgical outcome measure. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Principal component analysis based hyperspectral image fusion in imaging spectropolarimeter

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Zhang, Chunmin

    2017-02-01

    Image fusion is of great importance in object detection. A PCA-based image fusion method is proposed, and a pixel-level averaging method and a wavelet-based method were implemented for comparison. Several performance metrics that require no reference image were used to evaluate the fusion algorithms. The PCA-based image fusion method showed the best performance.

  16. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore less than that obtained with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.

  17. Independent component analysis based source number estimation and its comparison for mechanical systems

    NASA Astrophysics Data System (ADS)

    Cheng, Wei; Lee, Seungchul; Zhang, Zhousuo; He, Zhengjia

    2012-11-01

    It is challenging to correctly separate mixed signals into source components when the source number is not known a priori. In this paper, we propose a novel source number estimation method based on independent component analysis (ICA) and clustering evaluation analysis. We investigate and benchmark three information-based source number estimation methods: the Akaike information criterion (AIC), minimum description length (MDL), and an improved Bayesian information criterion (IBIC). All of these methods are comparatively studied in numerical and experimental case studies with typical mechanical signals. The results demonstrate that the proposed ICA-based source number estimation with nonlinear dissimilarity measures performs more stably and robustly than the information-based methods for mechanical systems.

  18. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images and resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the face images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
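
    A compact sketch of the combination, assuming PyWavelets and scikit-learn; the 'haar' wavelet, the subband choice, and the 1-NN matcher are illustrative assumptions, not the paper's exact configuration:

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      def wavelet_pca_features(faces, n_components=40):
          """`faces` is (n_images, h, w); PCA runs on the low-frequency DWT subband."""
          ll = np.stack([pywt.dwt2(img, "haar")[0] for img in faces])  # LL subband
          X = ll.reshape(len(faces), -1)
          pca = PCA(n_components=n_components).fit(X)
          return pca, pca.transform(X)

      # Usage sketch (hypothetical arrays):
      # pca, feats = wavelet_pca_features(train_faces)
      # clf = KNeighborsClassifier(n_neighbors=1).fit(feats, train_labels)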

  19. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA), and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to adaptively decompose it into mode functions. Second, independent components are separated by ICA, and how they affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
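
    A sketch of the pipeline's ICA and ARIMA stages, assuming scikit-learn and statsmodels; the VMD step is left to whatever implementation is at hand, so `modes` below is simply its assumed (n_modes, n_samples) output:

      import numpy as np
      from sklearn.decomposition import FastICA
      from statsmodels.tsa.arima.model import ARIMA

      def analyze_and_forecast(price, modes, order=(1, 1, 1), steps=30):
          """`price` is the 1-D price series; `modes` its VMD modes (computed elsewhere)."""
          # Separate statistically independent drivers from the VMD modes.
          ica = FastICA(n_components=modes.shape[0], random_state=0)
          ics = ica.fit_transform(modes.T)           # columns = independent components
          # Forecast the price with ARIMA; the ICs support the factor analysis.
          fit = ARIMA(price, order=order).fit()
          return ics, fit.forecast(steps=steps)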

  1. [A Composition Analysis Method of Mixed Pigments Based on Spectrum Expression and Independent Component Analysis].

    PubMed

    Wang, Gong-ming; Liu, Zhi-yong

    2015-06-01

    Reflectance spectrometry is a common method for composition analysis of mixed pigments. In this method, similarity is used to determine the types of basic pigments that constitute the mixed pigments, but the result may be inaccurate because it is easily influenced by the variety of basic pigments. In this study, a composition analysis method for mixed pigments based on spectrum expression and independent component analysis is proposed, with which the composition of mixed pigments can be calculated accurately. First, the spectral information of the mixed pigments is obtained with a spectrometer and expressed as a discrete signal. After that, the spectral information of the basic pigments is deduced with independent component analysis. Then, the types of basic pigments are determined by calculating the spectral similarity between the basic pigments and known pigments. Finally, the ratios of the basic pigments are obtained by solving the Kubelka-Munk equation system. Simulated spectral data from the Munsell color card were used to validate this method. The compositions of mixed pigments from three basic pigments were determined under both normal and disturbed conditions, and the compositions of mixtures of several pigments from a set of eight basic pigments were deduced successfully. The curves of the separated pigment spectra are very similar to those of the original pigment spectra: the average similarity is 97.72%, and the maximum reaches 99.95%. The calculated ratios of the basic pigments are close to the original ones. This method is therefore suitable for composition analysis of mixed pigments.
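
    The Kubelka-Munk step can be illustrated with the single-constant form, in which K/S = (1 - R)^2 / (2R) is additive in concentration; a sketch assuming NumPy/SciPy, with reflectances sampled per wavelength (the paper's exact equation system may differ):

      import numpy as np
      from scipy.optimize import nnls

      def k_over_s(R):
          """Kubelka-Munk function K/S = (1 - R)^2 / (2R), elementwise."""
          return (1.0 - R) ** 2 / (2.0 * R)

      def pigment_ratios(mix_R, basic_R):
          """Single-constant K-M: (K/S)_mix ~ sum_i c_i (K/S)_i; solve for c.
          `basic_R` is (n_pigments, n_wavelengths), `mix_R` is (n_wavelengths,)."""
          c, _ = nnls(k_over_s(basic_R).T, k_over_s(mix_R))   # non-negative least squares
          return c / c.sum()                                  # normalize to ratios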

  2. Formal Methods for Quality of Service Analysis in Component-Based Distributed Computing

    DTIC Science & Technology

    2003-12-01

    Component-Based Software Architecture is a promising solution for distributed computing. To develop high quality software, analysis of non-functional...based distributed computing is proposed and represented formally using Two-Level Grammar (TLG), an object-oriented formal specification language. TLG

  3. Identification of pure component spectra by independent component analysis in glucose prediction based on mid-infrared spectroscopy.

    PubMed

    Hahn, Sangjoon; Yoon, Gilwon

    2006-11-10

    We present a method for glucose prediction from mid-IR spectra by independent component analysis (ICA). This method is able to identify pure, or individual, absorption spectra of constituent components from the mixture spectra without a priori knowledge of the mixture. This method was tested with a two-component system consisting of an aqueous solution of both glucose and sucrose, which exhibit distinct but closely overlapped spectra. ICA combined with principal component analysis was able to identify a spectrum for each component, the correct number of components, and the concentrations of the components in the mixture. This method does not need a calibration process and is advantageous in noninvasive glucose monitoring since expensive and time-consuming clinical tests for data calibration are not required.

  4. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed by the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, with the following results: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
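
    The failure-influence ranking can be sketched with NetworkX by running PageRank on the reversed cascade graph, so that score flows toward failure sources; the graph below is an invented toy, not the paper's machine center:

      import networkx as nx

      # Edge u -> v: a failure of u can propagate to v.
      cascade = nx.DiGraph([("pump", "coolant"), ("pump", "spindle"),
                            ("coolant", "spindle"), ("spindle", "toolholder")])

      # PageRank on the reversed graph scores components by how much
      # failure originates from them (their failure-influence degree).
      influence = nx.pagerank(cascade.reverse())
      print(sorted(influence.items(), key=lambda kv: -kv[1]))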

  5. Reduction of a collisional-radiative mechanism for argon plasma based on principal component analysis

    SciTech Connect

    Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.

    2015-06-15

    This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic-specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set onto a reduced basis made up of its principal components, so that the flow governing equations are solved only for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the MG-PCA technique enables a substantial reduction of the computational time.

  6. Independent component feature-based human activity recognition via Linear Discriminant Analysis and Hidden Markov Model.

    PubMed

    Uddin, Md; Lee, J J; Kim, T S

    2008-01-01

    In proactive computing, human activity recognition from image sequences is an active research area. This paper presents a novel approach to human activity recognition based on Linear Discriminant Analysis (LDA) of Independent Component (IC) features extracted from shape information. With the extracted features, a Hidden Markov Model (HMM) is applied for training and recognition. The recognition performance using LDA of IC features has been compared to other approaches, including Principal Component Analysis (PCA), LDA of PCs, and ICA. The preliminary results show a much improved recognition rate with the proposed method.

  7. Dependent component analysis based approach to robust demarcation of skin tumors

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2009-02-01

    A method for robust demarcation of basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of red-green-blue (RGB) fluorescent images of the BCC. It exploits the spectral diversity between the BCC and the surrounding tissue. DCA is an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by the spectral similarity between the BCC and surrounding tissue. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms. Through comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, and ICA, we experimentally demonstrate good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image varies by almost two orders of magnitude.

  8. Batch process monitoring based on multiple-phase online sorting principal component analysis.

    PubMed

    Lv, Zhaomin; Yan, Xuefeng; Jiang, Qingchao

    2016-09-01

    Existing phase-based batch or fed-batch process monitoring strategies generally have two problems: (1) the phase number is difficult to determine, and (2) the data have uneven lengths. In this study, a multiple-phase online sorting principal component analysis modeling strategy (MPOSPCA) is proposed to monitor multiple-phase batch processes online. Based on all batches of off-line normal data, a new multiple-phase partition algorithm is proposed, in which k-means and a defined average Euclidean radius are employed to determine the multiple-phase data sets and the phase number. Principal component analysis is then applied to build the model in each phase, and all the components are retained. In online monitoring, the Euclidean distance is used to select the monitoring model. All the components undergo online sorting through a parameter defined by Bayesian inference (BI). The first several components are retained to calculate the T(2) statistics. Finally, the respective probability indices of the T(2) statistics are obtained using BI as the moving-average strategy. The feasibility and effectiveness of MPOSPCA are demonstrated through a simple numerical example and the fed-batch penicillin fermentation process.

  9. Seislet-based morphological component analysis using scale-dependent exponential shrinkage

    NASA Astrophysics Data System (ADS)

    Yang, Pengliang; Fomel, Sergey

    2015-07-01

    Morphological component analysis (MCA) is a powerful tool used in image processing to separate different geometrical components (cartoons and textures, curves and points, etc.). MCA is based on the observation that many complex signals may not be sparsely represented using only one dictionary/transform, but can have a sparse representation when several over-complete dictionaries/transforms are combined. In this paper we propose seislet-based MCA for seismic data processing, reformulating the MCA algorithm in the shaping-regularization framework. Successful seislet-based MCA depends on reliable slope estimation of seismic events, which is done by plane-wave destruction (PWD) filters. An exponential shrinkage operator, which unifies many existing thresholding operators, is adopted in scale-dependent shaping regularization to promote sparsity. Numerical examples demonstrate the superior performance of the proposed exponential shrinkage operator and the potential of seislet-based MCA in application to trace interpolation and multiple removal.

  10. A novel concealed information test method based on independent component analysis and support vector machine.

    PubMed

    Gao, Junfeng; Lu, Liang; Yang, Yong; Yu, Gang; Na, Liantao; Rao, NiNi

    2012-01-01

    The concealed information test (CIT) has drawn much attention and has been widely investigated in recent years. In this study, a novel CIT method based on denoised P3 and machine learning was proposed to improve the accuracy of lie detection. Thirty participants were assigned as guilty and innocent participants and performed paradigms with 3 types of stimuli. The electroencephalogram (EEG) signals were recorded and separated into single trials. To enhance the signal-to-noise ratio (SNR) of the P3 components, independent component analysis (ICA) was adopted to separate non-P3 components (i.e., artifacts) from every single trial, and a new method based on a topography template was proposed to automatically identify the P3 independent components (ICs). The P3 waveforms with high SNR were then reconstructed at the Pz electrode. Second, 3 groups of features based on time, frequency, and wavelets were extracted from the reconstructed P3 waveforms. Finally, the 2 classes of feature samples were used to train a support vector machine (SVM) classifier, which has higher performance than several other classifiers. The optimal number of P3 ICs and the other parameter values in the classifiers were determined by cross-validation. The presented method achieved a balanced test accuracy of 84.29% in detecting P3 components for the guilty and innocent participants, improving the efficiency of the CIT in comparison with previously reported methods.

  11. Improved gene prediction by principal component analysis based autoregressive Yule-Walker method.

    PubMed

    Roy, Manidipa; Barman, Soma

    2016-01-10

    Spectral analysis using Fourier techniques is popular in gene prediction because of its simplicity. Model-based autoregressive (AR) spectral estimation gives better resolution even for small DNA segments, but the selection of an appropriate model order is a critical issue. In this article, a technique is proposed in which the Yule-Walker autoregressive (YW-AR) process is combined with principal component analysis (PCA) for dimensionality reduction. The spectral peaks of the DNA signal are used to detect protein-coding regions based on the 1/3 frequency component. Optimal model order selection is no longer critical because noise is removed by PCA prior to power spectral density (PSD) estimation. The eigenvalue ratio is used to find the threshold between the signal and noise subspaces for data reduction. The superiority of the proposed method over the fast Fourier transform (FFT) method and the AR method combined with the wavelet packet transform (WPT) is established with the help of receiver operating characteristics (ROC) and a discrimination measure (DM), respectively.
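
    For readers unfamiliar with the 1/3 frequency cue, the sketch below shows the classical FFT version of it on binary indicator sequences; the paper replaces this FFT spectrum with a PCA-denoised Yule-Walker AR estimate, and the window length below is an arbitrary multiple of 3:

      import numpy as np

      def period3_power(dna, window=351):
          """Sliding-window spectral power at frequency N/3; coding DNA peaks there."""
          scores = []
          for start in range(len(dna) - window + 1):
              w = dna[start:start + window]
              p = 0.0
              for base in "ACGT":
                  x = np.fromiter((1.0 if b == base else 0.0 for b in w), float)
                  p += abs(np.fft.fft(x)[window // 3]) ** 2   # bin at k = N/3
              scores.append(p)
          return np.array(scores)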

  12. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    PubMed

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. The scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.

  13. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches fail to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. It consists of three steps. First, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, to reconstruct a new set of data called Virtual Intrinsic Mode Functions (VIMFs). Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail according to their trends and transformation coefficients; the analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis was also conducted to verify the analysis. Results from the empirical studies of the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.

  14. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  15. [In vitro transdermal delivery of the active fraction of xiangfusiwu decoction based on principal component analysis].

    PubMed

    Li, Zhen-Hao; Liu, Pei; Qian, Da-Wei; Li, Wei; Shang, Er-Xin; Duan, Jin-Ao

    2013-06-01

    The objective of the present study was to establish a method based on principal component analysis (PCA) for studying the transdermal delivery of the multiple components of a Chinese medicine, and to use this method to choose the best penetration enhancers for the active fraction of Xiangfusiwu decoction (BW). Transdermal delivery experiments on six active components, including ferulic acid, paeoniflorin, albiflorin, protopine, tetrahydropalmatine, and tetrahydrocolumbamine, were carried out in improved Franz diffusion cells with isolated rat abdominal skin. The concentrations of these components were determined by LC-MS/MS; the total factor scores of the concentrations at different times were then calculated using PCA and employed, instead of the concentrations themselves, to compute the cumulative amounts and steady fluxes, the latter of which served as the indexes for optimizing the penetration enhancers. The results showed that, compared to the control group, the steady fluxes of the other groups increased significantly, and 4% azone with 1% propylene glycol showed the best effect. The six components penetrated the skin well under the action of the penetration enhancers. The method established in this study proved suitable for studying the transdermal delivery of multiple components; it provides a scientific basis for preparation research on Xiangfusiwu decoction and can serve as a reference for Chinese medicine research.

  16. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  17. Image-based pupil plane characterization via principal component analysis for EUVL tools

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-03-01

    We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the Semiconductor High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.

  18. Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study

    NASA Astrophysics Data System (ADS)

    Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas

    2009-02-01

    We aim to capture and apply 3-dimensional bone fragility features for fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigen images) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject, and used the fracture risk indices to discriminate the fracture patients and controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the AUC (area under curve). When using the first group as the training group and the second as the test group, the AUC was 0.880, compared to conventional fracture risk estimation methods based on bone densitometry, which had AUC values ranging between 0.782 and 0.871. When using the second group as the training group, the AUC was 0.839, compared to densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from hip QCT atlas are associated with hip fracture. Use of such features may provide new quantitative measures of interest to osteoporosis.
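
    The train/test workflow maps onto a few lines of scikit-learn; a sketch in which logistic regression stands in for the paper's risk model (not specified here), with flattened, atlas-registered QCT volumes as input:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def fracture_auc(train_X, train_y, test_X, test_y, n_pcs=10):
          """`*_X` are (n_subjects, n_voxels) registered scans; y = 1 for fracture."""
          pca = PCA(n_components=n_pcs).fit(train_X)            # eigen images
          clf = LogisticRegression(max_iter=1000).fit(pca.transform(train_X), train_y)
          risk = clf.predict_proba(pca.transform(test_X))[:, 1]  # fracture risk index
          return roc_auc_score(test_y, risk)                    # discrimination efficacy (AUC)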

  19. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter.

  20. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams; it has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes two new IPCA-based outlier detection methods and a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.

  1. Independent component analysis-based source-level hyperlink analysis for two-person neuroscience studies

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-02-01

    Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments and simultaneously recording the brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity, or hyperlink, of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS) hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks directly from sensor data while ignoring the fact that sensor-level signals contain confounding noise, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, a source-level analysis framework based on independent component analysis (ICA) is proposed to investigate the hyperlinks in an fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared on simulated datasets, and increased sensitivity and specificity of hyperlink analysis with our proposed method were demonstrated in both simulated and real two-person experiments.

  2. Inverting geodetic time series with a principal component analysis-based inversion method

    NASA Astrophysics Data System (ADS)

    Kositsky, A. P.; Avouac, J.-P.

    2010-03-01

    The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through an inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series, showing that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 Mw 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
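
    The decomposition at the heart of PCAIM is a truncated SVD of the displacement matrix; a sketch with NumPy (the per-component static slip inversions and their recombination, which depend on the fault model, are not shown):

      import numpy as np

      def pcaim_decompose(displacements, n_components):
          """`displacements` is (n_stations, n_epochs). Returns spatial patterns,
          weights, and time functions; each spatial pattern is then inverted for
          slip with a standard elastic half-space dislocation inversion, and the
          resulting slip components are recombined with the time functions."""
          demeaned = displacements - displacements.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(demeaned, full_matrices=False)
          k = n_components
          return U[:, :k], s[:k], Vt[:k]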

  3. Aberration measurement based on principal component analysis of aerial images of optimized marks

    NASA Astrophysics Data System (ADS)

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo

    2014-10-01

    We propose an aberration measurement technique based on principal component analysis of aerial images of optimized marks (AMAI-OM). Zernike aberrations are retrieved using a linear relationship between the aerial image and Zernike coefficients. The linear relationship is composed of the principal components (PCs) and regression matrix. A centering process is introduced to compensate position offsets of the measured aerial image. A new test mark is designed in order to improve the centering accuracy and theoretical accuracy of aberration measurement together. The new test marks are composed of three spaces with different widths, and their parameters are optimized by using an accuracy evaluation function. The offsets of the measured aerial image are compensated in the centering process and the adjusted PC coefficients are obtained. Then the Zernike coefficients are calculated according to these PC coefficients using a least square method. The simulations using the lithography simulators PROLITH and Dr.LiTHO validate the accuracy of our method. Compared with the previous aberration measurement technique based on principal component analysis of aerial image (AMAI-PCA), the measurement accuracy of Zernike aberrations under the real measurement condition of the aerial image is improved by about 50%.

  4. Classification of EEG-based mental fatigue using principal component analysis and Bayesian neural network.

    PubMed

    Rifai Chai; Tran, Yvonne; Naik, Ganesh R; Nguyen, Tuan N; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T

    2016-08-01

    This paper presents an electroencephalography (EEG)-based classification between pre- and post-mental-load tasks for mental fatigue detection in 65 healthy participants. During data collection, eyes-closed and eyes-open tasks were performed before and after the mental load tasks. For the computational intelligence, the system combines principal component analysis (PCA) for dimension reduction of the original 26 channels of EEG data, power spectral density (PSD) as the feature extractor, and a Bayesian neural network (BNN) as the classifier. After applying PCA, the dimension of the data is reduced from 26 EEG channels to 6 principal components (PCs) with above 90% of the information retained. Based on these 6 PCs, during eyes open, the classification of pre-task (alert) vs. post-task (fatigue) using the Bayesian neural network resulted in a sensitivity of 76.8%, a specificity of 75.1%, and an accuracy of 76%. During eyes closed, the classification between pre- and post-task resulted in a sensitivity of 76.1%, a specificity of 74.5%, and an accuracy of 75.3%. Further, the classification results using only the 6 PCs of data are comparable to those using the original 26 EEG channels. This finding will help reduce the computational complexity of data analysis based on 26 channels of EEG for mental fatigue detection.

  5. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-01

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance.
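
    The motion/enhancement split rests on robust PCA; the toy principal component pursuit below illustrates the low-rank-plus-sparse decomposition by alternating singular-value and soft thresholding. It is a simplification of the solvers normally used (e.g., inexact ALM) with ad hoc parameters, not the authors' implementation.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, n_iter=200):
    """Split M into low-rank L (stable structure) + sparse S (e.g. contrast
    enhancement) by alternating thresholding. Illustrative only."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * np.abs(M).mean()
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding updates the low-rank part.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt
        # Soft thresholding updates the sparse part.
        resid = M - L
        S = np.sign(resid) * np.maximum(np.abs(resid) - lam * mu, 0.0)
    return L, S

# Toy example: rows = flattened frames of a time series, columns = pixels.
rng = np.random.default_rng(3)
frames = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 400))
frames[5, 100:120] += 4.0                 # localized, frame-specific change
L, S = robust_pca(frames)
```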

  6. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis

    PubMed Central

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-01-01

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance. PMID:27681452

  7. [Component analysis of complex mixed solution based on multidimensional diffuse reflectance spectroscopy].

    PubMed

    Li, Gang; Xiong, Chan; Zhao, Li-ying; Lin, Ling; Tong, Ying; Zhang, Bao-ju

    2012-02-01

    In the present paper, the authors propose a method for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy, exploiting the information that spectrum signals carry about the various optical properties of the analyte's components. The experimental instrument comprised a supercontinuum laser source, a motorized precision translation stage and a spectrometer. Intralipid-20% was taken as the analyte and diluted over a range of 1%-20% in distilled water. The diffuse reflectance spectrum signal was measured at 24 points within a distance of 1.5-13 mm (at an interval of 0.5 mm) from the incidence point. A partial least squares model was used for modeling and forecasting analysis of the spectral data collected from single and multiple points. The results showed that the most accurate calibration model was created from the spectral data acquired at the nearest 1-13 points above the incident point, and the most accurate prediction model from the spectral signal acquired at the nearest 1-7 points. This proved that multidimensional diffuse reflectance spectroscopy can improve the spectral signal-to-noise ratio. Compared with traditional spectrum technology using a single optical property such as absorbance or reflectance, this method increases the contribution of the scattering characteristics of the analyte. Using a variety of optical properties of the analyte can therefore improve the accuracy of modeling and forecasting, and provides a basis for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy.
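
    The modeling step is standard partial least squares regression; a minimal scikit-learn sketch follows, with synthetic multi-point spectra and concentrations standing in for the Intralipid measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
concentration = rng.uniform(1, 20, 60)                  # Intralipid %, toy
spectra = np.outer(concentration, rng.random(24 * 50))  # 24 points x 50 bins
spectra += 0.05 * rng.standard_normal(spectra.shape)    # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration,
                                          random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R^2 on held-out samples:", pls.score(X_te, y_te))
```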

  8. Principal components analysis.

    PubMed

    Groth, Detlef; Hartmann, Stefanie; Klie, Sebastian; Selbig, Joachim

    2013-01-01

    Principal components analysis (PCA) is a standard tool in multivariate data analysis to reduce the number of dimensions while retaining as much as possible of the data's variation. Instead of investigating thousands of original variables, the first few components containing the majority of the data's variation are explored. The visualization and statistical analysis of these new variables, the principal components, can help to find similarities and differences between samples. Important original variables that are the major contributors to the first few components can be discovered as well. This chapter seeks to deliver a conceptual understanding of PCA as well as a mathematical description. We describe how PCA can be used to analyze different datasets, and we include practical code examples. Possible shortcomings of the methodology and ways to overcome these problems are also discussed.
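
    In the spirit of the practical examples the chapter mentions (whose language is not specified here), a minimal PCA run on a standard dataset:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Project samples onto the first two components and inspect how much of
# the data's variation each component retains.
X = load_iris().data
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                  # the new variables (PC scores)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("variable loadings:\n", pca.components_)
```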

  9. Analysis of active components in Salvia miltiorrhiza injection based on vascular endothelial cell protection.

    PubMed

    Shen, Jie; Yang, Kai; Sun, Caihua; Zheng, Minxia

    2014-09-01

    Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurement of antioxidant properties were used to characterize the active ingredients of Salvia miltiorrhiza injection (SMI). HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of SMI serum powder caused differences in vascular endothelial cell protection. Bivariate correlation analysis showed that salvianolic acid B, tanshinol and protocatechuic aldehyde were active components of SMI because they correlated with the antioxidant properties.

  10. NURBS-based isogeometric analysis for the computation of flows about rotating components

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Hughes, T. J. R.

    2008-12-01

    The ability of non-uniform rational B-splines (NURBS) to exactly represent circular geometries makes NURBS-based isogeometric analysis attractive for applications involving flows around and/or induced by rotating components (e.g., submarine and surface ship propellers). The advantage over standard finite element discretizations is that rotating components may be introduced into a stationary flow domain without geometric incompatibility. Although geometric compatibility is exactly achieved, the discretization of the flow velocity and pressure remains incompatible at the interface between the stationary and rotating subdomains. This incompatibility is handled by using a weak enforcement of the continuity of solution fields at the interface of the stationary and rotating subdomains.

  11. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    PubMed

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations. © The Author(s) 2015.

  12. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging.

    PubMed

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, and the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 map of the tumor-bearing mice were in good agreement with the actual tumor location. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  13. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  14. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from experiments on certain cases, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from these cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, parallel machine, and number of processors.

  15. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method and the results are encouraging.

  16. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables, for which it as yet offers no remedy. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  17. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  18. Cardiac autonomic changes in middle-aged women: identification based on principal component analysis.

    PubMed

    Trevizani, Gabriela A; Nasario-Junior, Olivassé; Benchimol-Barbosa, Paulo R; Silva, Lilian P; Nadal, Jurandir

    2016-07-01

    The purpose of this study was to investigate the application of the principal component analysis (PCA) technique to the power spectral density function (PSD) of consecutive normal RR intervals (iRR), aiming to assess its ability to discriminate healthy women according to age group: young (20-25 years old) and middle-aged (40-60 years old). Thirty healthy, non-smoking female volunteers were investigated (13 young [mean ± SD (median): 22.8 ± 0.9 years (23.0)] and 17 middle-aged [51.7 ± 5.3 years (50.0)]). The iRR sequence was collected over ten minutes, breathing spontaneously, in the supine position and in the morning, using a heart rate monitor. After selecting an iRR segment (5 min) with the smallest variance, an autoregressive model was used to estimate the PSD. Five principal component coefficients, extracted from the PSD signals, were retained for analysis with a Mahalanobis distance classifier. A threshold established by logistic regression allowed separation of the groups with 100% specificity, 83.2% sensitivity and 93.3% total accuracy. The PCA appropriately classified the two groups of women in relation to age (young and middle-aged) based on PSD analysis of consecutive normal RR intervals. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
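
    A minimal sketch of the classification step under stated assumptions (synthetic PSD estimates, a pooled covariance, and training-set accuracy only): PC coefficients are extracted and each subject is assigned to the class whose mean is nearer in Mahalanobis distance.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
psd = rng.standard_normal((30, 128))       # placeholder PSD estimates
age_group = np.array([0] * 13 + [1] * 17)  # 0 = young, 1 = middle-aged

pcs = PCA(n_components=5).fit_transform(psd)    # five PC coefficients
VI = np.linalg.inv(np.cov(pcs, rowvar=False))   # inverse pooled covariance
means = [pcs[age_group == g].mean(axis=0) for g in (0, 1)]

predicted = np.array([np.argmin([mahalanobis(x, m, VI) for m in means])
                      for x in pcs])
print("training accuracy:", (predicted == age_group).mean())
```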

  19. Component-Based Analysis of Fault-Tolerant Real-Time Programs

    DTIC Science & Technology

    2007-01-01

    …context of component-based design of fault-tolerant real-time programs. Regarding completeness, there are two main issues in such component-based design… to determine whether such components exist in fault-tolerant programs irrespective of how they are designed…

  1. The use of principal component and cluster analysis to differentiate banana peel flours based on their starch and dietary fibre components.

    PubMed

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-08-01

    Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits was assessed for total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) showed that a single component accounted for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating different behaviour according to the stage of ripeness based on the starch and dietary fibre components. We concluded that the starch and dietary fibre components can be used to discriminate between flours prepared from peels of fruits at different stages of ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food.
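
    A toy recreation of the PCA-plus-clustering workflow; the compositional values below are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Six variables per sample: TS, DS, RS, TDF, SDF, IDF (made-up values).
rng = np.random.default_rng(6)
green = rng.normal([60, 10, 50, 40, 5, 35], 2, size=(8, 6))
ripe = rng.normal([20, 12, 8, 45, 8, 37], 2, size=(8, 6))
X = StandardScaler().fit_transform(np.vstack([green, ripe]))

pca = PCA().fit(X)
print("variance explained by PC1:", pca.explained_variance_ratio_[0])

# Ward-linkage clustering cut into two groups (green vs. ripe).
clusters = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
print("cluster labels:", clusters)
```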

  2. The Use of Principal Component and Cluster Analysis to Differentiate Banana Peel Flours Based on Their Starch and Dietary Fibre Components

    PubMed Central

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-01-01

    Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits was assessed for total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) showed that a single component accounted for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating different behaviour according to the stage of ripeness based on the starch and dietary fibre components. We concluded that the starch and dietary fibre components can be used to discriminate between flours prepared from peels of fruits at different stages of ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food. PMID:24575193

  3. Independent component analysis of instantaneous power-based fMRI.

    PubMed

    Zhong, Yuan; Zheng, Gang; Liu, Yijun; Lu, Guangming

    2014-01-01

    In functional magnetic resonance imaging (fMRI) studies using the spatial independent component analysis (sICA) method, a "latent variables" model is often employed, based on the assumption that fMRI data are linear mixtures of statistically independent signals. However, actual fMRI signals are nonlinear and do not automatically meet the requirements of sICA. To provide a better solution to this problem, we proposed a novel approach termed instantaneous power based fMRI (ip-fMRI) for regularization of fMRI data. Given that the instantaneous power of an fMRI signal is a scalar value, it should be a linear mixture that naturally satisfies the "latent variables" model. On our simulated data, the accuracy curves and the resulting receiver-operating characteristic curves indicate that, when using sICA, the proposed approach is superior to traditional fMRI in terms of accuracy and specificity. Experimental results from human subjects show that the spatial components of hand movement task-induced activation reveal a brain network more specific to motor function with ip-fMRI than with traditional fMRI. We conclude that ICA decomposition of ip-fMRI may be used to localize energy signal changes in the brain and has potential for the detection of brain activity.
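
    The core transformation is simple enough to sketch: square the series to obtain instantaneous power, then run spatial ICA. The array sizes and component count below are arbitrary assumptions on synthetic data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
bold = rng.standard_normal((160, 2000))    # time points x voxels (toy data)
power = (bold ** 2).T                      # instantaneous power, voxels first

# Spatial ICA: voxels are the samples, so the recovered sources are maps.
ica = FastICA(n_components=10, random_state=0)
spatial_maps = ica.fit_transform(power)    # voxels x components
time_courses = ica.mixing_                 # time points x components
```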

  4. Image edge detection based tool condition monitoring with morphological component analysis.

    PubMed

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis: by decomposing the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. [Study on an epilepsy EEG processing system based on independent component analysis (ICA)].

    PubMed

    Liu, Tie-bing; Tang, Li-ming; Wu, Min

    2007-01-01

    An epilepsy EEG processing system based on independent component analysis (ICA) is introduced to meet the requirements of epileptic screening in clinical medicine. The system adopts a modular structure and virtual instrument technology; its hardware is mainly housed in a signal-detecting box, data are exchanged between the box and a notebook computer or personal digital assistant (PDA) through a digital interface, and ICA is executed on the notebook computer or PDA. The virtual EEG processing system based on ICA not only performs ICA algorithms easily but also greatly improves development efficiency, making it suitable for large-scale disease screening.

  6. GeoPCA: a new tool for multivariate analysis of dihedral angles based on principal component geodesics

    PubMed Central

    Sargsyan, Karen; Wright, Jon; Lim, Carmay

    2012-01-01

    The GeoPCA package is the first tool developed for multivariate analysis of dihedral angles based on principal component geodesics. Principal component geodesic analysis provides a natural generalization of principal component analysis for data distributed in non-Euclidean space, as in the case of angular data. GeoPCA presents projection of angular data on a sphere composed of the first two principal component geodesics, allowing clustering based on dihedral angles as opposed to Cartesian coordinates. It also provides a measure of the similarity between input structures based on only dihedral angles, in analogy to the root-mean-square deviation of atoms based on Cartesian coordinates. The principal component geodesic approach is shown herein to reproduce clusters of nucleotides observed in an η–θ plot. GeoPCA can be accessed via http://pca.limlab.ibms.sinica.edu.tw. PMID:22139913

  7. [Geographical distribution of left ventricular Tei index based on principal component analysis].

    PubMed

    Xu, Jinhui; Ge, Miao; He, Jinwei; Xue, Ranyin; Yang, Shaofang; Jiang, Jilin

    2014-11-01

    To provide a scientific standard of the left ventricular Tei index for healthy people from various regions of China, and to lay a reliable foundation for the evaluation of left ventricular diastolic and systolic function, correlation and principal component analysis were used to explore the left ventricular Tei index based on data from 3,562 samples from 50 regions of China obtained by literature retrieval. The nine geographical factors were longitude (X₁), latitude (X₂), altitude (X₃), annual sunshine hours (X₄), annual average temperature (X₅), annual average relative humidity (X₆), annual precipitation (X₇), annual temperature range (X₈) and annual average wind speed (X₉). ArcGIS software was applied to calculate the spatial distribution regularities of the left ventricular Tei index. There is a significant correlation between healthy people's left ventricular Tei index and geographical factors; the correlation coefficients were -0.107 (r₁), -0.301 (r₂), -0.029 (r₃), -0.277 (r₄), -0.256 (r₅), -0.289 (r₆), -0.320 (r₇), -0.310 (r₈) and -0.117 (r₉), respectively. A linear equation relating the Tei index to the geographical factors was obtained by regression analysis based on the three extracted principal components. The geographical distribution tendency chart for healthy people's left ventricular Tei index was fitted by ArcGIS spatial interpolation analysis. The geographical distribution of the left ventricular Tei index in China follows a certain pattern: the reference value in the North is higher than that in the South, and the value in the East is higher than that in the West.
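
    A compact sketch of the modeling chain (standardize the nine factors, extract three principal components, regress the Tei index on them); the numbers are random placeholders rather than the study's 3,562 samples.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
geo = rng.standard_normal((50, 9))            # X1..X9 for 50 regions (toy)
tei = 0.36 + 0.02 * rng.standard_normal(50)   # regional Tei means (toy)

scores = PCA(n_components=3).fit_transform(
    StandardScaler().fit_transform(geo))
model = LinearRegression().fit(scores, tei)
print("intercept and PC weights:", model.intercept_, model.coef_)
```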

  8. Adaptive clutter filtering based on sparse component analysis in ultrasound color flow imaging.

    PubMed

    Li, Peng; Yang, Xiaofeng; Zhang, Dalong; Bian, Zhengzhong

    2008-07-01

    An adaptive method based on sparse component analysis is proposed for filtering strong clutter in ultrasound color flow imaging (CFI). In the present method, the focal underdetermined system solver (FOCUSS) algorithm is employed; its iteration is based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. By finding the localized-energy solution vector representing strong clutter components, the FOCUSS algorithm first extracts the clutter from the original signal. However, the initialization of the basis function matrix affects the filtering performance of FOCUSS algorithms. Thus, two FOCUSS clutter-filtering methods, the original and the modified, are obtained by initializing the basis function matrix with a predetermined set of monotone sinusoids and with the discrete Karhunen-Loeve transform (DKLT) and spatial averaging, respectively. The two FOCUSS filtering methods were validated through experimental tests against several conventional clutter filters using simulated and clinical data. The results demonstrate that both FOCUSS filtering methods can adaptively track signal variation and perform clutter filtering effectively. Moreover, the modified method further improves filtering performance and retains more blood flow information in regions close to vessel walls.
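
    The FOCUSS iteration itself is short; the sketch below shows the basic re-weighted minimum-norm update on a toy underdetermined problem. It omits the basis-matrix initialization schemes (sinusoids or DKLT) that distinguish the two methods above, and all sizes are illustrative.

```python
import numpy as np

def focuss(A, b, n_iter=30, eps=1e-12):
    """Basic FOCUSS: re-weighted minimum-norm solutions of A x = b that
    progressively concentrate energy on a few coefficients (illustrative)."""
    x = np.linalg.pinv(A) @ b                 # minimum-norm initialization
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)          # weights from previous solution
        x = W @ np.linalg.pinv(A @ W) @ b     # weighted minimum-norm update
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((15, 40))             # underdetermined system
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]        # 3-sparse ground truth
x_hat = focuss(A, A @ x_true)
print("largest recovered entries:", np.sort(np.argsort(np.abs(x_hat))[-3:]))
```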

  9. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  10. Damage localization in linear-form structures based on sensitivity investigation for principal component analysis

    NASA Astrophysics Data System (ADS)

    Viet Ha, Nguyen; Golinval, Jean-Claude

    2010-10-01

    This paper addresses the problem of damage detection and localization in linear-form structures. Principal component analysis (PCA) is a popular technique for dynamic system investigation. The aim of the paper is to present a damage diagnosis method based on the sensitivities of PCA results in the frequency domain. Starting from frequency response functions (FRFs) measured at different locations on the structure, PCA is performed to determine the main features of the signals. Sensitivities of the principal directions obtained from PCA to structural parameters are then computed and inspected according to the location of sensors; their variation from the healthy state to the damaged state indicates damage locations. It is worth noting that damage localization is performed without the need for modal identification. The influences of noise, parameter choice and the number of sensors are discussed. The efficiency and limitations of the proposed method are illustrated using numerical and real-world examples.

  11. A pipeline for copy number variation detection based on principal component analysis.

    PubMed

    Chen, Jiayu; Liu, Jingyu; Boutte, David; Calhoun, Vince D

    2011-01-01

    DNA copy number variation (CNV), an important structural variation, is known to be pervasive in the human genome, and the determination of CNVs is essential to understanding their potential effects on disease susceptibility. However, CNV detection using SNP array data is challenging due to the low signal-to-noise ratio. In this study, we propose a principal component analysis (PCA) based approach for data correction, and present a novel processing pipeline for reliable CNV detection. The tested data include both simulated and real SNP array datasets. Simulations demonstrate a substantial reduction in the false positive rate of CNV detection after PCA correction. We also observe a significant improvement in data quality in real SNP array data after correction.

  12. A Multi-Fault Diagnosis Method for Sensor Systems Based on Principal Component Analysis

    PubMed Central

    Zhu, Daqi; Bai, Jie; Yang, Simon X.

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of sensors are computed by using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated. A fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, after combining different sensors and reconstructing the signals of combined sensors, the SPE is calculated to locate the faulty sensors. Finally, the feasibility and effectiveness of the proposed method is demonstrated by simulation and comparison studies, in which two sensors in the system are out of order at the same time. PMID:22315537
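
    The detection logic is a standard PCA residual test; a minimal sketch follows, with synthetic sensor data and a simple empirical threshold in place of the usual theoretical SPE control limit.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
normal = rng.standard_normal((500, 8))      # 8 sensors, fault-free history
pca = PCA(n_components=3).fit(normal)

def spe(samples):
    """Squared prediction error: residual after PCA reconstruction."""
    recon = pca.inverse_transform(pca.transform(samples))
    return ((samples - recon) ** 2).sum(axis=1)

threshold = np.percentile(spe(normal), 99)  # crude empirical control limit
test = normal[:5].copy()
test[2, 4] += 6.0                           # inject a fault on sensor 5
print("flagged samples:", np.where(spe(test) > threshold)[0])
```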

  13. A remote sensing image fusion method based on feedback sparse component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Yu, Xianchuan; Pei, Wenjing; Hu, Dan; Zhang, Libao

    2015-12-01

    We propose a new remote sensing image (RSI) fusion technique based on sparse blind source separation theory. Our method employs feedback sparse component analysis (FSCA), which can extract the original image in a step-by-step manner and is robust against noise. For RSIs from the China-Brazil Earth Resources Satellite, FSCA can separate useful surface feature information from redundant information and noise. The FSCA algorithm is therefore used to develop two RSI fusion schemes: one focuses on fusing high-resolution and multi-spectral images, while the other fuses synthetic aperture radar bands. The experimental results show that the proposed method can preserve spectral and spatial details of the source images. For certain evaluation indexes, our method performs better than classical fusion methods.

  14. An image reconstruction algorithm for electrical capacitance tomography based on robust principle component analysis.

    PubMed

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-02-05

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from the capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes the multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, where the images are reconstructed by a batching pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm.

  15. A multi-fault diagnosis method for sensor systems based on principal component analysis.

    PubMed

    Zhu, Daqi; Bai, Jie; Yang, Simon X

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of sensors are computed by using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated. A fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, after combining different sensors and reconstructing the signals of combined sensors, the SPE is calculated to locate the faulty sensors. Finally, the feasibility and effectiveness of the proposed method is demonstrated by simulation and comparison studies, in which two sensors in the system are out of order at the same time.

  16. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data.

    PubMed

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks.

  17. Sex-based differences in lifting technique under increasing load conditions: A principal component analysis.

    PubMed

    Sheppard, P S; Stevenson, J M; Graham, R B

    2016-05-01

    The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. Copyright © 2016. Published by Elsevier Ltd.

  18. Diagnosis of Compound Fault Using Sparsity Promoted-Based Sparse Component Analysis.

    PubMed

    Hao, Yansong; Song, Liuyang; Ke, Yanliang; Wang, Huaqing; Chen, Peng

    2017-06-06

    Compound faults often occur in rotating machinery, which increases the difficulty of fault diagnosis. In this case, blind source separation, which usually includes independent component analysis (ICA) and sparse component analysis (SCA), was proposed to separate the mixed signals. SCA, which is based on the sparsity of the target signals, was developed to separate compound faults and diagnose them effectively, owing to its advantage over ICA in underdetermined conditions. However, vibration signals are often inadequately sparse, and it is difficult to represent them in a sparse way. Accordingly, to overcome this problem, a sparsity-promoting approach named wavelet modulus maxima is applied to obtain a sparse observation signal. Then, the potential function is utilized to estimate the number of source signals and the mixing matrix based on the sparse signal. Finally, the separation of the source signals is achieved with the shortest path method. To validate the effectiveness of the proposed method, simulated signals and vibration signals measured from faulty roller bearings are used. The faults that occur in a roller bearing are an outer-race flaw, an inner-race flaw and a rolling-element flaw. The results show that the fault features acquired using the proposed approach are evidently close to the theoretical values; for instance, the inner-race feature frequency of 101.3 Hz is very close to the theoretical value of 101 Hz. Therefore, the suggested method effectively separates compound faults, even in underdetermined cases. In addition, a comparison shows that the proposed method outperforms the traditional SCA method when the vibration signals are inadequately sparse.

  19. Application of independent component analysis in target trajectory prediction based on moving platform

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Mao, Yao; Gan, Xun; Tian, Jing

    2015-10-01

    In Electro-Optical tracking systems, compound control is used to maintain high-precision tracking of fast targets by predicting the trajectory of the target. A traditional ground-based Electro-Optical tracking system uses encoder data and the target missing quantity read from image sensors to obtain the target trajectory using prediction filtering techniques. Compared with traditional ground-based systems, in an Electro-Optical tracking system on a moving platform the relative angle between the tracking system and the ground cannot be read directly from encoder data. Thus the combination of inertial sensor data and the target missing quantity is required to compose the trajectory of targets. However, the output of the inertial sensors contains not only the information of the target's motion, but also the residual error of vibration suppression. The existence of vibration suppression residual error affects the trajectory prediction accuracy, thereby reducing compensation precision and the stability of the compound control system. Independent component analysis (ICA), which can effectively separate source signals from measurement signals, is introduced to the target trajectory prediction field in this paper. An experimental system based on the method is built by mounting a small dual-axis disturbance platform, taken as the stable platform, on a large dual-axis disturbance platform used to simulate the motion of the moving platform. The result shows that the vibration residual is separated and subtracted from the combined motion data. The target motion is thereby obtained and the feasibility of the method is proved.

  20. Metabolic module mining based on Independent Component Analysis in Arabidopsis thaliana.

    PubMed

    Han, Xiao; Chen, Cong; Hyun, Tae Kyung; Kumar, Ritesh; Kim, Jae-Yean

    2012-09-01

    Independent Component Analysis (ICA) has been introduced as a useful tool for gene-functional discovery in animals. However, this approach has been poorly utilized in the plant sciences. In the present study, we exploited ICA combined with pathway enrichment analysis to address the statistical challenges associated with genome-wide analysis in plant systems. To generate an Arabidopsis metabolic platform, we collected 4,373 Affymetrix ATH1 microarray datasets. Of the 3,232 metabolic genes and transcription factors, 99.47% were identified in at least one component, indicating coverage of most of the metabolic pathways by the components. During the metabolic pathway enrichment analysis, we found components that indicate independent regulation between the isoprenoid biosynthesis pathways. We also utilized this analysis tool to investigate transcription factors involved in secondary cell wall biogenesis. This approach identified remarkably more transcription factors than previously reported analysis tools. A website providing user-friendly searching and downloading of the entire dataset analyzed by ICA is available at http://kimjy.gnu.ac.kr/ICA.files/slide0002.htm . ICA combined with pathway enrichment analysis may provide a powerful approach for extracting the components responsible for a biological process of interest in plant systems.

  1. Structure borne noise analysis using Helmholtz equation least squares based forced vibro acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation and hence cannot guarantee the most cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical coordinates using spherical wave expansion functions, use acoustic pressures measured in the near field of a vibrating object as input data to reconstruct the vibro-acoustic responses on the source surface and the acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of the normal surface velocity over the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain an orthogonal basis known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a manner similar to how modal decomposition identifies the critical natural modes in a structural vibration. Finally…

  2. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods.

    PubMed

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-15

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification. Copyright © 2017 Elsevier B.V. All rights reserved.
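
    Plain NMF (without the smoothness constraint of CNMF, which has no stock scikit-learn implementation) is easy to sketch; the synthetic Gaussian bands below stand in for THz absorption features of a binary mixture.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(11)
freq = np.linspace(0.2, 2.5, 300)             # THz axis (arbitrary units)
comp = np.vstack([np.exp(-(freq - c) ** 2 / 0.02) for c in (0.8, 1.6)])
abund = rng.random((20, 2))                   # 20 mixtures of 2 components
mixtures = abund @ comp + 0.01 * rng.random((20, 300))

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(mixtures)             # estimated abundances
H = model.components_                         # estimated pure spectra
```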

  3. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    NASA Astrophysics Data System (ADS)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.

  4. Contact- and distance-based principal component analysis of protein dynamics

    SciTech Connect

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.

  5. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    PubMed Central

    2010-01-01

    Background Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degree-of-freedom (DoF) hand to achieve hand dexterity requires selectively modulating many different EMG signals so that each joint can move independently, and this could require significant cognitive effort from the user. Methods A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). This set of principal components was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Results Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036
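
    The control mapping reduces to reconstructing a full joint vector from two PC scores; a minimal sketch with placeholder grasp data (the real coefficients would come from the hand's sensory recordings):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(12)
grasp_postures = rng.random((50, 16))      # stand-in for recorded grasps

pca = PCA(n_components=2).fit(grasp_postures)

def posture_from_input(a1, a2):
    """Map a 2-D control input onto a 16-DoF joint configuration."""
    return pca.mean_ + a1 * pca.components_[0] + a2 * pca.components_[1]

print(posture_from_input(0.5, -0.2).shape)  # -> (16,)
```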

  6. Principal components analysis based control of a multi-DoF underactuated prosthetic hand.

    PubMed

    Matrone, Giulia C; Cipriani, Christian; Secco, Emanuele L; Magenes, Giovanni; Carrozza, Maria Chiara

    2010-04-23

    Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degree-of-freedom (DoF) hand to achieve hand dexterity requires selectively modulating many different EMG signals so that each joint can move independently, and this could require significant cognitive effort from the user. A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). This set of principal components was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis.

  7. Robust principal component analysis-based four-dimensional computed tomography.

    PubMed

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-07

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of a low-rank matrix and a sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable to other imaging problems for motion reduction and/or change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e., a temporal spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames.
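
    The low-rank plus sparse split at the heart of this model can be prototyped compactly. The sketch below uses a plain alternating scheme of singular-value thresholding and soft thresholding rather than the paper's split Bregman solver; the default weight lam and the fixed step-size heuristic mu are common conventions assumed here for illustration only.

        import numpy as np

        def rpca(M, lam=None, n_iter=100):
            """Split M into low-rank L ('background') and sparse S ('motion')."""
            m, n = M.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))      # standard default weight
            mu = m * n / (4.0 * np.abs(M).sum())    # step-size heuristic (assumed)
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(n_iter):
                # singular-value thresholding -> low-rank part
                U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # soft thresholding of the residual -> sparse part
                R = M - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            return L, S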

  8. A method for developing biomechanical response corridors based on principal component analysis.

    PubMed

    Sun, W; Jin, J H; Reed, M P; Gayzik, F S; Danelson, K A; Bass, C R; Zhang, J Y; Rupp, J D

    2016-10-03

    The standard method for specifying target responses for human surrogates, such as crash test dummies and human computational models, involves developing a corridor based on the distribution of a set of empirical mechanical responses. These responses are commonly normalized to account for the effects of subject body shape, size, and mass on impact response. Limitations of this method arise from the normalization techniques, which are based on the assumption that human geometry scales linearly with size and, in some cases, on simple mechanical models. To address these limitations, a new method was developed for corridor generation that applies principal component (PC) analysis to align response histories. Rather than using normalization techniques to account for the effects of subject size on impact response, linear regression models are used to model the relationship between PC features and subject characteristics. Corridors are generated using Monte Carlo simulation based on estimated distributions of PC features for each PC. This method is applied to pelvis impact force data from a recent series of lateral impact tests to develop corridor bounds for a group of signals associated with a particular subject size. Compared to the two most common methods for response normalization, the corridors generated by the new method are narrower and better retain the features in signals that are related to subject size and body shape.
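
    As a rough illustration of the corridor-generation idea, the sketch below fits PCA to aligned response histories, draws Monte Carlo samples of the PC scores, and takes percentile bounds. It simplifies the paper's approach by sampling each PC score from an independent normal distribution instead of conditioning on subject characteristics via regression; the input file name is hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA

        # hypothetical array of aligned pelvis force histories (n_subjects, n_samples)
        signals = np.load("pelvis_force_histories.npy")

        pca = PCA(n_components=5).fit(signals)
        scores = pca.transform(signals)                    # (n_subjects, 5)
        mu, sd = scores.mean(axis=0), scores.std(axis=0)   # per-PC score statistics

        rng = np.random.default_rng(1)
        sim_scores = rng.normal(mu, sd, size=(10000, 5))   # Monte Carlo draws per PC
        sim_curves = pca.inverse_transform(sim_scores)     # back to time histories

        lower, upper = np.percentile(sim_curves, [5, 95], axis=0)  # corridor bounds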

  9. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
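
    A minimal PCA-based ROM can be assembled from a snapshot matrix of CFD results and a regression from inputs to PC scores, as sketched below. The file names, component count, and choice of ridge regression are assumptions for illustration, not the workflow of the Aspen Plus/FLUENT study.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import Ridge

        X = np.load("rom_inputs.npy")    # (n_runs, n_inputs): sampled operating conditions
        Y = np.load("rom_fields.npy")    # (n_runs, n_cells): CFD output snapshots

        pca = PCA(n_components=10).fit(Y)                  # compress the output fields
        reg = Ridge(alpha=1e-3).fit(X, pca.transform(Y))   # map inputs -> PC scores

        def rom_predict(x_new):
            """Evaluate the ROM for a new input vector; returns an approximate field."""
            return pca.inverse_transform(reg.predict(np.atleast_2d(x_new)))[0]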

  10. Principal component analysis (PCA)-based k-nearest neighbor (k-NN) analysis of colonic mucosal tissue fluorescence spectra.

    PubMed

    Kamath, Sudha D; Mahato, Krishna K

    2009-08-01

    The objective of this study was to verify the suitability of principal component analysis (PCA)-based k-nearest neighbor (k-NN) analysis for discriminating normal and malignant autofluorescence spectra of colonic mucosal tissues. Autofluorescence spectroscopy, a noninvasive technique, has high specificity and sensitivity for discrimination of diseased and nondiseased colonic tissues. Previously, we assessed the efficacy of the technique on colonic data using PCA Match/No match and Artificial Neural Networks (ANNs) analyses. To improve the classification reliability, the present work was conducted using PCA-based k-NN analysis and was compared with previously obtained results. A total of 115 fluorescence spectra (69 normal and 46 malignant) were recorded from 13 normal and 10 malignant colonic tissues with 325 nm pulsed laser excitation in the spectral region 350-600 nm in vitro. We applied PCA to extract the relevant information from the spectra and used a nonparametric k-NN analysis for classification. The normal and malignant spectra showed large variations in shape and intensity. Statistically significant differences were found between normal and malignant classes. The performance of the analysis was evaluated by calculating the statistical parameters specificity and sensitivity, which were found to be 100% and 91.3%, respectively. The results obtained in this study showed good discrimination between normal and malignant conditions using PCA-based k-NN analysis.
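
    The classification pipeline described above maps directly onto standard tooling. A minimal sketch, assuming hypothetical arrays of the 115 spectra and their labels, and an illustrative component count and k:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        spectra = np.load("colon_spectra.npy")   # (115, n_wavelengths), 350-600 nm
        labels = np.load("colon_labels.npy")     # 0 = normal, 1 = malignant

        clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
        print(cross_val_score(clf, spectra, labels, cv=5).mean())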

  11. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    Background In contemporary biology, complex biological processes are increasingly studied by collecting and analyzing measurements of the same entities with different analytical platforms. Such data comprise a number of data blocks that are coupled via a common mode. The goal of collecting this type of data is to discover biological mechanisms that underlie the behavior of the variables in the different data blocks. The simultaneous component analysis (SCA) family of data analysis methods is suited for this task. However, an SCA may be hampered by the data blocks being subjected to different amounts of measurement error, or noise. To unveil the true mechanisms underlying the data, it could be fruitful to take noise heterogeneity into consideration in the data analysis. Maximum likelihood based SCA (MxLSCA-P) was developed for this purpose. In a previous simulation study it outperformed normal SCA-P. That study, however, did not mimic typical functional genomics data sets in many respects, such as data blocks coupled via the experimental mode, more variables than experimental units, and medium to high correlations between variables. Here, we present a new simulation study in which the usefulness of MxLSCA-P compared to ordinary SCA-P is evaluated within a typical functional genomics setting. Subsequently, the performance of the two methods is evaluated by analysis of a real life Escherichia coli metabolomics data set. Results In the simulation study, MxLSCA-P outperforms SCA-P in terms of recovery of the true underlying scores of the common mode and of the true values underlying the data entries. The advantage of MxLSCA-P was especially pronounced when the simulated data blocks were subject to different noise levels. In the analysis of an E. coli metabolomics data set, MxLSCA-P provided a slightly better and more consistent interpretation. Conclusion MxLSCA-P is a promising addition to the SCA family. The analysis of coupled functional genomics

  12. Reconstruction of transcriptional regulatory networks by stability-based network component analysis.

    PubMed

    Chen, Xi; Xuan, Jianhua; Wang, Chen; Shajahan, Ayesha N; Riggins, Rebecca B; Clarke, Robert

    2013-01-01

    Reliable inference of transcription regulatory networks is a challenging task in computational biology. Network component analysis (NCA) has become a powerful scheme to uncover regulatory networks behind complex biological processes. However, the performance of NCA is impaired by the high rate of false connections in binding information. In this paper, we integrate stability analysis with NCA to form a novel scheme, namely stability-based NCA (sNCA), for regulatory network identification. The method mainly addresses the inconsistency between gene expression data and binding motif information. Small perturbations are introduced to the prior regulatory network, and the distance among multiple estimated transcription factor (TF) activities is computed to reflect the stability of each TF's binding network. For target gene identification, multivariate regression and the t-statistic are used to calculate the significance of each TF-gene connection. Simulation studies are conducted and the experimental results show that sNCA can achieve an improved and robust performance in TF identification as compared to NCA. The approach for target gene identification is also demonstrated to be suitable for identifying true connections between TFs and their target genes. Furthermore, we have successfully applied sNCA to breast cancer data to uncover the role of TFs in regulating endocrine resistance in breast cancer.

  13. SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components

    SciTech Connect

    Wendt, R

    2014-06-15

    Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819 respectively and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking. Supported by a sponsored research

  14. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising.

    PubMed

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results for the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiments on clinical images, the proposed AT-PCA method suppresses noise, enhances edges, and improves image quality more effectively than the NLM and KSVD denoising methods.
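
    As a rough sketch of patch-based PCA denoising (without the tensor formulation, adaptive search windows, or LMMSE coefficient shrinkage of AT-PCA), the following projects non-overlapping patches onto their leading principal components and reconstructs the image; the file name, patch size, and retained rank are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        img = np.load("ct_slice.npy").astype(float)    # hypothetical noisy CT slice
        p = 8                                          # patch edge length
        h, w = (img.shape[0] // p) * p, (img.shape[1] // p) * p

        # tile the image into non-overlapping p x p patches, one row vector each
        patches = (img[:h, :w].reshape(h // p, p, w // p, p)
                   .swapaxes(1, 2).reshape(-1, p * p))

        pca = PCA(n_components=8).fit(patches)         # keep only the leading PCs
        denoised = pca.inverse_transform(pca.transform(patches))

        out = denoised.reshape(h // p, w // p, p, p).swapaxes(1, 2).reshape(h, w)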

  15. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
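
    The PCA-then-ANN mapping can be sketched end to end on synthetic data. Below, nine hypothetical sensor responses are generated for a one-axis translation, compressed with PCA, and regressed onto position with a small neural network; the field model, network size, and noise level are assumptions, not the experimental setup.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        pos = rng.uniform(0, 100, size=(2000, 1))      # positions along the axis (mm)
        # synthetic stand-in for 9 magnetic sensors at different locations
        fields = np.hstack([1.0 / (1.0 + (pos - c) ** 2)
                            for c in np.linspace(10, 90, 9)])
        fields += rng.normal(0.0, 1e-3, fields.shape)  # Gaussian measurement noise

        model = make_pipeline(PCA(n_components=4),     # pseudo-linear filter
                              MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))
        model.fit(fields, pos.ravel())
        print(model.predict(fields[:3]), pos[:3].ravel())  # predicted vs true positions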

  16. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis

    PubMed Central

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253

  17. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results for the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiments on clinical images, the proposed AT-PCA method suppresses noise, enhances edges, and improves image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  18. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-sized spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-sized aberration terms, the precise phase aberration map is obtained and can thus be compensated for by multiplication with its conjugate. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved while the computation time is significantly reduced. Experimental results demonstrate that our proposed technique achieves both high compensating accuracy and robustness compared with other developed compensation methods.

  19. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also a lower number of undersegmentation errors compared to its original counterpart. The proposed change detection technique produces no omission errors and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
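
    The screening of connected components by size can be sketched with standard labelling tools. The following assumes a hypothetical binary change mask on a regular grid and illustrative area/width thresholds; the height check, which needs the LIDAR point heights, is omitted.

        import numpy as np
        from scipy import ndimage

        change_mask = np.load("change_mask.npy")   # hypothetical binary change grid
        labels, n = ndimage.label(change_mask)     # connected component labelling

        cell = 0.5                                 # assumed grid resolution (m)
        for i, sl in enumerate(ndimage.find_objects(labels), start=1):
            area = (labels[sl] == i).sum() * cell ** 2
            width = min(sl[0].stop - sl[0].start, sl[1].stop - sl[1].start) * cell
            if area >= 10.0 and width >= 2.0:      # illustrative building-part thresholds
                print(f"component {i}: area={area:.1f} m^2, min width={width:.1f} m")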

  20. Recursive principal components analysis.

    PubMed

    Voegtlin, Thomas

    2005-10-01

    A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time-series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
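
    Oja's constrained Hebbian rule, on which the recursive network is built, fits in a few lines. The sketch below estimates the first principal component of a synthetic data stream; the learning rate and data are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(5000, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])  # anisotropic data

        w = rng.normal(size=4)
        w /= np.linalg.norm(w)
        eta = 0.01
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian growth constrained by weight decay

        print(w / np.linalg.norm(w))     # ~ +/-[1, 0, 0, 0], the leading eigenvector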

  1. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    SciTech Connect

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82% of the variance, while the second component (bleached coral) explains 46.35%. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals, thus enabling accurate monitoring over time.

  2. The Langat River water quality index based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Mohd Ali, Zalina; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan

    2013-04-01

    The river Water Quality Index (WQI) is calculated using an aggregation function of six water quality sub-indices, together with their relative importance, or weights. The formula is used by the Department of Environment to indicate the general status of rivers in Malaysia. The six selected water quality variables used in the formula are: suspended solids (SS), biochemical oxygen demand (BOD), ammoniacal nitrogen (AN), chemical oxygen demand (COD), dissolved oxygen (DO) and pH. The sub-index calculations, determined by quality rating curves, and the weights were based on expert opinion. However, the use of sub-indices and the relative importance established in the formula is very subjective in nature and does not consider the inter-relationships among the variables. These relationships are important because of the multi-dimensional and complex characteristics of river water. Therefore, a well-known multivariate technique, Principal Component Analysis (PCA), was proposed to re-calculate the water quality index, specifically for the Langat River, based on this inter-relationship approach. This approach has not been well studied in river water quality index development in Malaysia, and is therefore relevant and important, given that the first river water quality index was developed in 1981. The PCA results showed that the weights obtained differ in their ranking of the relative importance of particular variables compared to the classical approach used in the WQI-DOE. Based on the new weights, the Langat River water quality index was calculated, and the comparison between the two indices is also discussed in this paper.
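
    One common way to turn PCA output into variable weights, assumed here for illustration (the paper's exact weighting scheme may differ), is to combine the absolute loadings of the retained components, weighted by explained variance, and normalize:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import scale

        data = np.load("langat_wq.npy")        # hypothetical (n_samples, 6): SS, BOD, AN, COD, DO, pH
        pca = PCA().fit(scale(data))

        k = np.sum(pca.explained_variance_ > 1.0)          # Kaiser criterion (assumed)
        load = np.abs(pca.components_[:k].T) * pca.explained_variance_ratio_[:k]
        weights = load.sum(axis=1) / load.sum()            # normalized variable weights

        sub_indices = np.load("langat_subindices.npy")     # hypothetical (n_samples, 6), 0-100 scale
        wqi = sub_indices @ weights                        # PCA-weighted index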

  3. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations. PMID:25061614

  4. Day-ahead crude oil price forecasting using a novel morphological component analysis based model.

    PubMed

    Zhu, Qing; He, Kaijian; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations.

  5. Moving object detection based on on-line block-robust principal component analysis decomposition

    NASA Astrophysics Data System (ADS)

    Yang, Biao; Cao, Jinmeng; Zou, Ling

    2017-07-01

    Robust principal component analysis (RPCA) decomposition is widely applied in moving object detection due to its ability to suppress environmental noise while separating the sparse foreground from the low-rank background. However, it may suffer from constant punishing parameters (resulting in confusion between foreground and background) and from holistic processing of all input frames (leading to poor real-time performance). Improvements to both issues are studied in this paper. A block-RPCA decomposition approach is proposed to handle the confusion while separating foreground from background. Each input frame is initially separated into blocks using three-frame differencing. Then, the punishing parameter of each block is computed from its motion saliency, acquired based on selective spatio-temporal interest points. To improve the real-time performance of the proposed method, an on-line solution to block-RPCA decomposition is utilized. Both qualitative and quantitative tests were implemented, and the results indicate the superiority of our method over some state-of-the-art approaches in detection accuracy, real-time performance, or both.

  6. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
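
    The filter-learning idea can be approximated by running FastICA directly on image patches, which yields a bank of statistically independent filters. The patch size, component count, and file name below are assumptions, and the original work instead selects among steerable filters.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.feature_extraction.image import extract_patches_2d

        texture = np.load("texture.npy")                   # hypothetical grayscale image
        patches = extract_patches_2d(texture, (8, 8), max_patches=5000, random_state=0)
        patches = patches.reshape(len(patches), -1)
        patches -= patches.mean(axis=1, keepdims=True)     # remove per-patch DC offset

        ica = FastICA(n_components=16, random_state=0, max_iter=500)
        sources = ica.fit_transform(patches)               # independent filter responses
        filters = ica.mixing_.T.reshape(-1, 8, 8)          # learned filter bank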

  7. Development and application of a time-history analysis for rotorcraft dynamics based on a component approach

    NASA Technical Reports Server (NTRS)

    Sopher, R.; Hallock, D. W.

    1985-01-01

    A time history analysis for rotorcraft dynamics based on dynamical substructures, and nonstructural mathematical and aerodynamic components is described. The analysis is applied to predict helicopter ground resonance and response to rotor damage. Other applications illustrate the stability and steady vibratory response of stopped and gimballed rotors, representative of new technology. Desirable attributes expected from modern codes are realized, although the analysis does not employ a complete set of techniques identified for advanced software. The analysis is able to handle a comprehensive set of steady state and stability problems with a small library of components.

  8. Cistanches identification based on fluorescent spectral imaging technology combined with principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Dong, Jia; Huang, Furong; Li, Yuanpeng; Xiao, Chi; Xian, Ruiyi; Ma, Zhiguo

    2015-03-01

    In this study, fluorescent spectral imaging technology combined with principal component analysis (PCA) and artificial neural networks (ANNs) was used to identify Cistanche deserticola, Cistanche tubulosa and Cistanche sinensis, which are traditional Chinese medicinal herbs. The fluorescence spectral imaging system acquired spectral images of 40 cistanche samples, and image denoising and binarization were applied to determine the effective pixels. Spectral curves covering the 450-680 nm wavelength range were then extracted for the study, preprocessed with a first-order derivative, and analyzed by principal component analysis and an artificial neural network. The results show that principal component analysis can broadly distinguish the cistanche species, and that further identification with the neural network makes the results more accurate: the correct classification rate for both the training and testing sets reached 100%. Identification of cistanche species based on fluorescence spectral imaging combined with principal component analysis and an artificial neural network is therefore feasible.

  9. Towards Zero Retraining for Myoelectric Control Based on Common Model Component Analysis.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Zhu, Xiangyang

    2016-04-01

    In spite of several decades of intense research and development, the existing algorithms of myoelectric pattern recognition (MPR) are yet to satisfy the criteria that a practical upper extremity prosthesis should fulfill. This study focuses on the criterion of short, or even zero, subject training. Due to the inherent nonstationarity in surface electromyography (sEMG) signals, current myoelectric control algorithms usually need to be retrained daily during multi-day usage. This study was conducted based on the hypothesis that there exist some invariant characteristics in the sEMG signals when a subject performs the same motion on different days. Therefore, given a set of classifiers (models) trained on several days, it is possible to find common characteristics among them. To this end, we proposed the common model component analysis (CMCA) framework, in which an optimized projection is found to minimize the dissimilarity among multiple models of linear discriminant analysis (LDA) trained using data from different days. Five intact-limbed subjects and two transradial amputee subjects participated in an experiment including six sessions of sEMG data recording, performed on six different days, to simulate the application of MPR over multiple days. The results demonstrate that CMCA has significantly better generalization ability with unseen data (not included in the training data), leading to improved classification accuracy and an increased completion rate in a motion test simulation when compared with the baseline reference method. The results indicate that CMCA holds great potential for the development of zero-retraining MPR.

  10. SU-F-BRA-13: Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Based On Principal Component Analysis

    SciTech Connect

    Roper, J; Bradshaw, B; Godette, K; Schreibmann, E; Chanyavanich, V

    2015-06-15

    Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications that required input from a treatment planner to achieve an acceptable plan were recorded. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
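
    The case-matching step reduces to a Mahalanobis ranking in PC-score space. In the sketch below the covariance is diagonal because PC scores are uncorrelated, so the distance simplifies to variance-scaled Euclidean distance; the file names and component count are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        library = np.load("contour_library.npy")   # hypothetical (120, n_points) registered contours
        test = np.load("test_contours.npy")        # hypothetical (n_points,) test case

        pca = PCA(n_components=10).fit(library)
        lib_scores = pca.transform(library)
        t = pca.transform(test.reshape(1, -1))[0]

        # PCs are uncorrelated, so Mahalanobis distance reduces to variance scaling
        d2 = ((lib_scores - t) ** 2 / pca.explained_variance_).sum(axis=1)
        best = np.argsort(d2)[:5]
        print("best-matched library cases:", best)  # seed arrangement taken from best[0]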

  11. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain)and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  12. Development of a graded index microlens based fiber optical trap and its characterization using principal component analysis

    PubMed Central

    Nylk, J.; Kristensen, M. V. G.; Mazilu, M.; Thayil, A. K.; Mitchell, C. A.; Campbell, E. C.; Powis, S. J.; Gunn-Moore, F. J.; Dholakia, K.

    2015-01-01

    We demonstrate a miniaturized single beam fiber optical trapping probe based on a high numerical aperture graded index (GRIN) micro-objective lens. This enables optical trapping at a distance of 200 μm from the probe tip. The fiber trapping probe is characterized experimentally using power spectral density analysis and an original approach based on principal component analysis for accurate particle tracking. Its use for biomedical microscopy is demonstrated through optically mediated immunological synapse formation. PMID:25909032

  13. [Qualitative analysis of chemical constituents in Si-Wu Decoction based on TCM component database].

    PubMed

    Wang, Zhen-fang; Zhao, Yang; Fan, Zi-quan; Kang, Li-ping; Qiao, Li-rui; Zhang, Jie; Gao, Yue; Ma, Bai-ping

    2015-10-01

    In order to clarify the chemical constituents of Si-Wu Decoction rapidly and holistically, we analyzed the ethanol extract of Si-Wu Decoction by UPLC/Q-TOF-MSE together with UNIFI, which is based on a traditional Chinese medicine component database, and the probable structures of 113 compounds were identified. The results show that this method can rapidly and effectively characterize the chemical compounds of Si-Wu Decoction and provides a new solution for the identification of components from complex TCM extracts.

  14. Temperature extraction in Brillouin optical time-domain analysis sensors using principal component analysis based pattern recognition.

    PubMed

    Azad, Abul Kalam; Khan, Faisal Nadeem; Alarashi, Waled Hussein; Guo, Nan; Lau, Alan Pak Tao; Lu, Chao

    2017-07-10

    We propose and experimentally demonstrate the use of principal component analysis (PCA) based pattern recognition to extract the temperature distribution from the measured Brillouin gain spectra (BGSs) along the fiber under test (FUT) obtained by a Brillouin optical time-domain analysis (BOTDA) system. The proposed scheme employs a reference database consisting of relevant ideal BGSs with known temperature attributes. PCA is then applied to the BGSs in the reference database as well as to the measured BGSs so as to reduce their size by extracting their most significant features. For each feature vector of a measured BGS, we then determine its best match in the reference database comprised of the reduced-size feature vectors of the ideal BGSs. The known temperature attribute corresponding to the best-matched BGS in the reference database is then taken as the extracted temperature of the measured BGS. We analyzed the performance of the PCA-based pattern recognition algorithm in detail and compared it with that of the curve fitting method. The experimental results validate that the proposed technique provides better accuracy, faster processing speed and larger noise tolerance for the measured BGSs. Therefore, the proposed PCA-based pattern recognition algorithm can be considered an attractive method for extracting temperature distributions along the fiber in BOTDA sensors.
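
    A toy version of the matching scheme can be built from simulated Lorentzian gain spectra: reference BGSs with known temperatures are compressed with PCA, and a measured spectrum is assigned the temperature of its nearest reference in feature space. The spectral model, the ~1 MHz per degree C frequency shift, and all parameters below are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import PCA

        f = np.linspace(10.75, 10.95, 201)                     # scan frequencies (GHz)
        temps = np.arange(20.0, 80.0, 0.5)                     # reference temperatures (C)
        bfs = 10.85 + 1.0e-3 * (temps - 20.0)                  # assumed ~1 MHz/C BFS shift
        ref = 1.0 / (1.0 + ((f - bfs[:, None]) / 0.015) ** 2)  # ideal Lorentzian BGSs

        pca = PCA(n_components=5).fit(ref)
        ref_feat = pca.transform(ref)

        def extract_temperature(measured_bgs):
            """Return the reference temperature whose BGS features best match."""
            feat = pca.transform(measured_bgs.reshape(1, -1))
            return temps[np.argmin(((ref_feat - feat) ** 2).sum(axis=1))]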

  15. Unsupervised spectral classification of astronomical x-ray sources based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Mu, Bo

    By virtue of the sensitivity of the XMM-Newton and Chandra X-ray telescopes, astronomers are capable of probing increasingly faint X-ray sources in the universe. On the other hand, we have to face a tremendous amount of X-ray imaging data collected by these observatories. We developed an efficient framework to classify astronomical X-ray sources through natural grouping of their reduced dimensionality profiles, which can faithfully represent the high dimensional spectral information. X-ray imaging spectral extraction techniques, which use standard astronomical software (e.g., SAS, FTOOLS and CIAO), provide an efficient means to investigate multiple X-ray sources in one or more observations at the same time. After applying independent component analysis (ICA), the high-dimensional spectra can be expressed by reduced dimensionality profiles in an independent space. An infrared spectral data set obtained for the stars in the Large Magellanic Cloud, observed by the Spitzer Space Telescope Infrared Spectrograph, has been used to test the unsupervised classification algorithms. The least classification error is achieved by the hierarchical clustering algorithm with the average linkage of the data, in which each spectrum is scaled by its maximum amplitude. Then we applied a similar hierarchical clustering algorithm based on ICA to a deep XMM-Newton X-ray observation of the field of the eruptive young star V1647 Ori. Our classification method establishes that V1647 Ori is a spectrally distinct X-ray source in this field. Finally, we classified the X-ray sources in the central field of a large survey, the Subaru/XMM-Newton deep survey, which contains a large population of high-redshift extragalactic sources. A small group of sources with maximum spectral peak above 1 keV are easily picked out from the spectral data set, and these sources appear to be associated with active galaxies. In general, these experiments confirm that our classification framework is an efficient X

  16. A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use

    PubMed Central

    Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien

    2016-01-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553

  17. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy.

  18. Structural damage continuous monitoring by using a data driven approach based on principal component analysis and cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid

    2017-05-01

    Continuous monitoring for damage detection in structural assessment requires low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology that is highly feasible for continuous damage assessment. Specifically, an algorithm based on a data-driven approach, using principal component analysis with acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two types of damage case are studied: a crack and a leak, one for each structure respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.

  19. Bio-inspired controller for a dexterous prosthetic hand based on Principal Components Analysis.

    PubMed

    Matrone, G; Cipriani, C; Secco, E L; Carrozza, M C; Magenes, G

    2009-01-01

    Controlling a dexterous myoelectric prosthetic hand with many degrees of freedom (DoFs) can be a very demanding task, requiring high concentration from the amputee and the ability to modulate many different muscular contraction signals. In this work a new approach to multi-DoF control is proposed, which makes use of Principal Component Analysis (PCA) to reduce the dimensionality of the DoF space and allows a 15-DoF hand to be driven by means of a 2-DoF signal. This approach has been tested and adapted to work on the underactuated robotic hand named CyberHand, using mouse cursor coordinates as input signals and a principal components (PCs) matrix taken from the literature. First trials show the feasibility of performing grasps using this method. Further tests with real EMG signals are foreseen.

  20. [Research of Electroencephalogram for Sleep Stage Based on Collaborative Representation and Kernel Entropy Component Analysis].

    PubMed

    Zhao, Panbo; Shi, Jun; Liu, Xiao; Jiang, Qikun; Gu, Yu

    2015-08-01

    Sleep quality is closely related to human health. It is very important to correctly discriminate the sleep stages for evaluating sleep quality and for diagnosing and analyzing sleep-related disorders. Polysomnography (PSG) signals are commonly used to record and analyze sleep stages. Effective feature extraction and representation is one of the most important steps in improving the performance of sleep stage classification. In this work, a collaborative representation (CR) algorithm was adopted to re-represent the original features extracted from the electroencephalogram signal, and then the kernel entropy component analysis (KECA) algorithm was further used to reduce the dimension of the CR feature. To evaluate the performance of CR-KECA, we compared the original feature, the CR feature, and the reduced CR feature (CR-PCA) obtained after principal component analysis (PCA). The experimental results of sleep stage classification indicated that the CR-KECA method achieved the best performance compared with the original feature, CR feature, and CR-PCA feature, with a classification accuracy of 68.74 +/- 0.46%, sensitivity of 68.76 +/- 0.43% and specificity of 92.19 +/- 0.11%. Moreover, the CR algorithm had low computational complexity, and the feature dimension after KECA was much smaller, which made the CR-KECA algorithm suitable for the analysis of large-scale sleep data.

  1. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.

  2. FPGA-based real-time blind source separation with principal component analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Matthew; Meyer-Baese, Uwe

    2015-05-01

    Principal component analysis (PCA) is a popular technique in reducing the dimension of a large data set so that more informed conclusions can be made about the relationship between the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Due to unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
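
    The GHA extends Oja's single-neuron rule to several components at once (Sanger's rule). A small NumPy reference version, useful for checking a hardware implementation against, is sketched below; the learning rate, epoch count, and synthetic data are assumptions.

        import numpy as np

        def gha(X, n_components=2, eta=1e-3, n_epochs=10, seed=0):
            """Generalized Hebbian Algorithm: sequential PC extraction."""
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
            for _ in range(n_epochs):
                for x in X:
                    y = W @ x
                    # Sanger's rule: Hebbian term minus lower-triangular decorrelation
                    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        X = np.random.default_rng(1).normal(size=(2000, 3)) @ np.diag([2.0, 1.0, 0.2])
        print(gha(X))   # rows approximate the two leading eigenvectors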

  3. Principal Components Analysis Based Unsupervised Feature Extraction Applied to Gene Expression Analysis of Blood from Dengue Haemorrhagic Fever Patients

    PubMed Central

    Taguchi, Y-h.

    2017-01-01

    Dengue haemorrhagic fever (DHF) sometimes occurs after recovery from the disease caused by Dengue virus (DENV), and is often fatal. However, the mechanism of DHF has not been determined, possibly because no suitable methodologies are available to analyse this disease. Therefore, more innovative methods are required to analyse the gene expression profiles of DENV-infected patients. Principal components analysis (PCA)-based unsupervised feature extraction (FE) was applied to the gene expression profiles of DENV-infected patients, and an integrated analysis of two independent data sets identified 46 genes as critical for DHF progression. PCA using only these 46 genes rendered the two data sets highly consistent. The application of PCA to the 46 genes of an independent third data set successfully predicted the progression of DHF. A fourth in vitro data set confirmed the identification of the 46 genes. These 46 genes included interferon- and heme-biosynthesis-related genes. The former are enriched in binding sites for STAT1, STAT2, and IRF1, which are associated with DHF-promoting antibody-dependent enhancement, whereas the latter are considered to be related to the dysfunction of spliceosomes, which may mediate haemorrhage. These results are outcomes that other types of bioinformatic analysis could hardly achieve. PMID:28276456

  4. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis.

    PubMed

    Lin, Yuan-Pin; Jao, Ping-Keng; Yang, Yi-Hsuan

    2017-01-01

    Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has gained intensive attention in recent years. However, as laboratory-oriented proof-of-concept studies are deployed toward real-world applications, researchers face an ecological challenge: the EEG patterns recorded in real life change substantially across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to EEG signals from a different day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in order to facilitate cross-day emotion classification, an issue that has received little attention in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering off the background EEG activity that contributed most to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the model was trained on EEG signals from one to four recording days and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (the accuracy was around chance level) nor replicated the encouraging improvement, owing to the inter-day EEG variability. This result demonstrated the

  5. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    PubMed Central

    Lin, Yuan-Pin; Jao, Ping-Keng; Yang, Yi-Hsuan

    2017-01-01

    Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has gained intensive attention in recent years. However, as laboratory-oriented proof-of-concept studies are deployed toward real-world applications, researchers face an ecological challenge: the EEG patterns recorded in real life change substantially across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to EEG signals from a different day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in order to facilitate cross-day emotion classification, an issue that has received little attention in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering off the background EEG activity that contributed most to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the model was trained on EEG signals from one to four recording days and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (the accuracy was around chance level) nor replicated the encouraging improvement, owing to the inter-day EEG variability. This result demonstrated the

  6. A method for dynamic spectrophotometric measurements in vivo using principal component analysis-based spectral deconvolution.

    PubMed

    Zupancic, Gregor

    2003-10-01

    A method was developed for dynamic spectrophotometric measurements in vivo in the presence of non-specific spectral changes due to external disturbances. This method was used to measure changes in mitochondrial respiratory pigment redox states in photoreceptor cells of live, white-eyed mutants of the blowfly Calliphora vicina. The changes were brought about by exchanging the atmosphere around an immobilised animal from air to N2 and back again by a rapid gas exchange system. During an experiment reflectance spectra were measured by a linear CCD array spectrophotometer. This method involves the pre-processing steps of difference spectra calculation and digital filtering in one and two dimensions. These were followed by time-domain principal component analysis (PCA). PCA yielded seven significant time-domain principal component vectors and seven corresponding spectral score vectors. In addition, through PCA we also obtained a time course of changes common to all wavelengths (the residual vector), corresponding to non-specific spectral changes due to preparation movement or mitochondrial swelling. In the final step the redox state time courses were obtained by fitting linear combinations of respiratory pigment difference spectra to each of the seven score vectors. The resulting matrix of factors was then multiplied by the matrix of seven principal component vectors to yield the time courses of respiratory pigment redox states. The method can be used, with minor modifications, in many cases of time-resolved optical measurements of multiple overlapping spectral components, especially in situations where non-specific external influences cannot be disregarded.
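
    As a rough sketch of the pipeline described above: time-domain PCA can be computed via an SVD of the wavelengths-by-time matrix of difference spectra, known pigment difference spectra are then fitted to the spectral score vectors, and the factor matrix is multiplied back onto the principal component time courses. All arrays below are random placeholders standing in for the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 400))  # difference spectra: wavelengths x time (placeholder)
P = rng.standard_normal((256, 3))    # known pigment difference spectra (placeholder)

# Time-domain PCA via SVD; keep k significant components
k = 7
U, s, Vt = np.linalg.svd(D - D.mean(axis=1, keepdims=True), full_matrices=False)
scores = U[:, :k] * s[:k]            # spectral score vectors (wavelength domain)
pcs = Vt[:k]                         # time-domain principal component vectors

# Fit linear combinations of pigment spectra to each score vector
F, *_ = np.linalg.lstsq(P, scores, rcond=None)   # pigments x k factor matrix

# Redox-state time courses: factor matrix times PC time courses
redox_tc = F @ pcs                   # pigments x time
```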

  7. Metabolic distance estimation based on principal component analysis of metabolic turnover.

    PubMed

    Nakayama, Yasumune; Putri, Sastia P; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-09-01

    Visualization of metabolic dynamism is important for various types of metabolic studies, including optimization of bio-production processes and studies of metabolism-related diseases. Many methodologies have been developed for metabolic studies. Among these, metabolic turnover analysis (MTA) is often used to analyze metabolic dynamics. MTA involves observation of changes in the isotopomer ratios of metabolites over time following introduction of isotope-labeled substrates. MTA has several advantages compared with (13)C-metabolic flux analysis, including the diversity of applicable samples, the variety of isotope tracers, and the wide range of target pathways. However, MTA produces highly complex data from which mining useful information is difficult. For easier interpretation of MTA data, a new approach was developed using principal component analysis (PCA). The resulting PCA score plot visualizes the metabolic distance, defined as the distance between metabolites on the real metabolic map, and points to metabolism of interest for further study. We used this method to analyze the central metabolism of Saccharomyces cerevisiae under moderately aerobic conditions, and time course data for 77 isotopomers of 14 metabolites were obtained. The PCA score plot for this dataset represented a metabolic map and revealed interesting phenomena, such as fumarate reductase activity under aerated conditions. These findings show the importance of multivariate analysis for MTA. In addition, because the approach is unbiased, this method has potential application to the analysis of less-studied pathways and organisms.

  8. Brain responses to emotional stimuli during breath holding and hypoxia: an approach based on the independent component analysis.

    PubMed

    Menicucci, Danilo; Artoni, Fiorenzo; Bedini, Remo; Pingitore, Alessandro; Passera, Mirko; Landi, Alberto; L'Abbate, Antonio; Sebastiani, Laura; Gemignani, Angelo

    2014-11-01

    Voluntary breath holding represents a physiological model of hypoxia. It consists of two phases of oxygen saturation dynamics: an initial slow decrease (normoxic phase) followed by a rapid drop (hypoxic phase) during which transitory neurological symptoms, as well as slight impairment of integrated cerebral functions such as emotional processing, can occur. This study investigated how breath holding affects emotional processing. To this aim we characterized the modulation of event-related potentials (ERPs) evoked by emotion-laden pictures as a function of the breath holding time course. We recorded ERPs during free breathing and breath holding performed in air by elite apnea divers. We modeled brain responses during free breathing with four independent components distributed over different brain areas, derived by an approach based on independent component analysis (ICASSO). We described ERP changes during breath holding by estimating amplitude scaling and time shifting of the same components (component adaptation analysis). Component 1 included the main EEG features of emotional processing, had a posterior localization and did not change during breath holding; component 2, localized over temporo-frontal regions, was present only in responses to unpleasant stimuli and decreased during breath holding, with no differences between breath holding phases; component 3, localized on the fronto-central midline regions, showed phase-independent breath holding decreases; component 4, quite widespread but with frontal prevalence, decreased in parallel with the hypoxic trend. The spatial localization of these components was compatible with a set of processing modules that affects the automatic and intentional control of attention. The reduction of unpleasant-related ERP components suggests that the evaluation of aversive and/or possibly dangerous situations might be altered during breath holding.

  9. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
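
    The full SLCCA codec (significance links, bit-plane coding, adaptive arithmetic coding) is beyond a short sketch, but its core within-subband step, clustering significant wavelet coefficients into connected components, can be illustrated with PyWavelets and SciPy. The wavelet, decomposition level, and threshold below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt
from scipy import ndimage

img = np.random.rand(256, 256)            # placeholder image
coeffs = pywt.wavedec2(img, 'bior4.4', level=4)

# Mark significant coefficients in one detail subband and label the
# connected components (within-subband clusters of significance)
subband = coeffs[1][0]                    # horizontal detail, coarsest level
T = 0.1 * np.abs(subband).max()           # illustrative significance threshold
sig_map = np.abs(subband) > T
labels, n_clusters = ndimage.label(sig_map)
```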

  10. Online Handwritten Signature Verification Using Neural Network Classifier Based on Principal Component Analysis

    PubMed Central

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capability for the verification, particularly with regard to the high variability inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, and yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  11. Online handwritten signature verification using neural network classifier based on principal component analysis.

    PubMed

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Yussof, Salman; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capability for the verification, particularly with regard to the high variability inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, and yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%.
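
    A minimal scikit-learn version of this verification pipeline, PCA feature extraction followed by an MLP classifier, is sketched below on synthetic placeholder data. The feature dimension, component count, and network size are illustrative, and the paper's specific selection of usually-discarded PCA components is not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 60))   # one feature vector per signature (placeholder)
y = rng.integers(0, 2, 400)          # 1 = genuine, 0 = forgery (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```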

  12. [Fetal electrocardiogram extraction based on independent component analysis and quantum particle swarm optimizer algorithm].

    PubMed

    Du, Yanqin; Huang, Hua

    2011-10-01

    The fetal electrocardiogram (FECG) is an objective index of the electrophysiological activity of the fetal heart. The acquired FECG is contaminated by the maternal electrocardiogram (MECG), so how to extract the fetal ECG quickly and effectively has become an important research topic. Among non-invasive FECG extraction algorithms, independent component analysis (ICA) is considered the best method, but the existing algorithms for obtaining the decomposition (unmixing) matrix have poor convergence properties. Quantum particle swarm optimization (QPSO) is an intelligent optimization algorithm with global convergence. In order to extract the FECG signal effectively and quickly, we propose a method combining ICA and QPSO. The results show that this approach can extract the useful signal more clearly and accurately than other non-invasive methods.
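
    Leaving aside the QPSO refinement, the ICA stage can be sketched with scikit-learn's FastICA: multichannel abdominal recordings are unmixed into independent sources, among which the fetal component is then identified. Channel counts and data below are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 4))   # abdominal ECG: samples x channels (placeholder)

ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X)             # columns: estimated independent sources
# In practice the fetal component is picked out among the columns of S,
# e.g., by its characteristic heart rate; in the paper's method, QPSO
# would replace the fixed-point search for the unmixing matrix.
```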

  13. Use of principal component analysis for differentiation of gelatine sources based on polypeptide molecular weights.

    PubMed

    Nur Azira, T; Che Man, Y B; Raja Mohd Hafidz, R N; Aina, M A; Amin, I

    2014-05-15

    The study aimed to differentiate between porcine and bovine gelatines in adulterated samples by utilising sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) combined with principal component analysis (PCA). The distinct polypeptide patterns of 6 porcine type A and 6 bovine type B gelatines in the molecular weight range from 50 to 220 kDa were studied. Experimental samples of raw gelatine were prepared by adding porcine gelatine in a proportion ranging from 5% to 50% (v/v) to bovine gelatine and vice versa. The method used was able to detect 5% porcine gelatine added to the bovine gelatine. There were no differences in the electrophoretic profiles of the jelly samples when the proteins were extracted with an acetone precipitation method. The simple approach employing SDS-PAGE and PCA reported in this paper may provide a useful tool for food authenticity issues concerning gelatine.

  14. Plant-wide process monitoring based on mutual information-multiblock principal component analysis.

    PubMed

    Jiang, Qingchao; Yan, Xuefeng

    2014-09-01

    Multiblock principal component analysis (MBPCA) methods are gaining increasing attention for monitoring plant-wide processes. Generally, MBPCA assumes that some process knowledge is incorporated for block division; however, process knowledge is not always available. A new, totally data-driven MBPCA method, which employs mutual information (MI) to divide the blocks automatically, is proposed. By constructing sub-blocks using MI, the division considers not only linear correlations between variables but also non-linear relations, thereby incorporating more statistical information. The PCA models in the sub-blocks reflect more local behaviors of the process, and the results of all blocks are combined by support vector data description. The proposed method is implemented on a numerical process and the Tennessee Eastman process. Monitoring results demonstrate the feasibility and efficiency of the method.
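
    One way to realize the automatic block division is to estimate a pairwise mutual-information matrix between process variables and cluster it; each resulting block would then receive its own PCA monitoring model. The sketch below uses scikit-learn's nonparametric MI estimator and SciPy hierarchical clustering on placeholder data; the block count is an illustrative choice, and the paper's support vector data description stage is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))   # process data: samples x variables (placeholder)
n = X.shape[1]

# Pairwise MI captures both linear and non-linear variable relations
MI = np.zeros((n, n))
for j in range(n):
    MI[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
MI = (MI + MI.T) / 2.0

# Convert similarity to distance and cut the dendrogram into blocks
D = MI.max() - MI
np.fill_diagonal(D, 0.0)
blocks = fcluster(linkage(squareform(D, checks=False), method='average'),
                  t=4, criterion='maxclust')   # block label per variable
```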

  15. [Algae identification research based on fluorescence spectral imaging technology combined with cluster analysis and principal component analysis].

    PubMed

    Liang, Man; Huang, Fu-rong; He, Xue-jia; Chen, Xing-dan

    2014-08-01

    To explore rapid, real-time algae detection methods, experiments were carried out using fluorescence spectral imaging technology combined with pattern recognition methods to identify different types of algae. Algae samples show a pronounced fluorescence effect during detection. A fluorescence spectral imaging system was used to collect spectral images of 40 algal samples. After image denoising, binarization, and determination of the effective pixels, the spectral curve of each sample was drawn from the spectral cube, giving spectra over the 400-720 nm wavelength range. Two pattern recognition methods, hierarchical cluster analysis and principal component analysis, were then used to process the spectral data. In the hierarchical cluster analysis, the Euclidean distance and weighted-average methods were used to calculate the cluster distances between samples, and the samples could be correctly classified at a distance level of L=2.452 or above, with an accuracy of 100%. For the principal component analysis, first-order derivative, second-order derivative, multiplicative scatter correction, standard normal variate and other pretreatments were applied to the raw spectral data before PCA; the second-order derivative pretreatment proved most effective, and the eight types of algae samples were distributed independently in the principal component eigenspace. It was thus shown to be feasible to use fluorescence spectral imaging technology combined with cluster analysis and principal component analysis for algae identification. The method is easy to operate, fast and nondestructive.
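
    The clustering step can be reproduced in outline with SciPy: spectra are pretreated with a second-order derivative (the most effective pretreatment reported above) and then clustered with Euclidean distances and average linkage, cutting at the distance level quoted in the abstract. The spectra here are random placeholders and the Savitzky-Golay window is an illustrative choice.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
spectra = rng.random((40, 161))   # 40 samples, 400-720 nm at 2 nm steps (placeholder)

# Second-order derivative pretreatment via Savitzky-Golay smoothing
d2 = savgol_filter(spectra, window_length=11, polyorder=3, deriv=2, axis=1)

# Euclidean distance + average linkage; cut at the reported level L = 2.452
Z = linkage(d2, method='average', metric='euclidean')
labels = fcluster(Z, t=2.452, criterion='distance')
```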

  16. A Novel Principal Component Analysis-Based Acceleration Scheme for LES-ODT: An A Priori Study

    NASA Astrophysics Data System (ADS)

    Echekki, Tarek; Mirgolbabaei, Hessan

    2012-11-01

    A parameterization of the composition space based on principal component analysis (PCA) is proposed to represent the transport equations with the one-dimensional turbulence (ODT) solutions of a hybrid large-eddy simulation (LES) and ODT scheme. An a priori validation of the proposed approach is implemented based on stand-alone ODT solutions of the Sandia Flame F, which is characterized by different combustion regimes ranging from pilot stabilization to extinction, reignition, and self-stabilized combustion. The PCA is carried out with the full thermo-chemical scalars' vector as well as with a subset of this vector made up primarily of major species and temperature. The results show that the different regimes are reproduced using only three principal components for the thermo-chemical scalars, based on either the full vector or the subset. Reproduction of the principal components' source terms represents a greater challenge. It is found that, using the subset of the thermo-chemical scalars' vector, both the minor species and the source terms of the first three principal components are reasonably well predicted.

  17. A robust principal component analysis algorithm for EEG-based vigilance estimation.

    PubMed

    Shi, Li-Chen; Duan, Ruo-Nan; Lu, Bao-Liang

    2013-01-01

    Feature dimensionality reduction methods with robustness are of great significance for making better use of EEG data, since EEG features are usually high-dimensional and contain a lot of noise. In this paper, a robust principal component analysis (PCA) algorithm is introduced to reduce the dimension of EEG features for vigilance estimation. Its performance is compared with that of standard PCA, L1-norm PCA, sparse PCA, and robust PCA in feature dimension reduction on an EEG data set of twenty-three subjects. To evaluate the performance of these algorithms, smoothed differential entropy features are used as the vigilance-related EEG features. Experimental results demonstrate that the robustness and performance of robust PCA are better than those of the other algorithms for both off-line and on-line vigilance estimation. The average RMSE (root mean square error) of vigilance estimation was 0.158 when robust PCA was applied to reduce the dimensionality of features, while the average RMSE was 0.172 when standard PCA was used for the same task.

  18. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native (Russian) Norms when compared with foreign requirements for the consideration of calculated situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA norms for pressure vessel material acceptance, together with data from pressure vessel hydrotests; (3) a comparison of RF and USA norms on the presence of defects in NPP vessels, development of defect schematization rules, and foundation of a calculated defect (semi-axis ratio a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) an analysis of crack initiation and propagation conditions according to the acting norms (including crack jumps); (6) the necessity to correct the estimation methods for ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculated situations (a) LBB and (b) short cracks; (7) the necessity to correct the estimation methods for ultimate states under static and cyclic loading (warm prestressing effect) of the pressure vessel, and an estimation of the stability of this effect; (8) proposals for corrections to the PNAE G-7-002-86 Norms.

  19. EMG-based muscle fatigue assessment during dynamic contractions using principal component analysis.

    PubMed

    Rogers, Daniel R; MacIsaac, Dawn T

    2011-10-01

    A novel approach to fatigue assessment during dynamic contractions was proposed which projected multiple surface myoelectric parameters onto the vector connecting the temporal start and end points in feature-space in order to extract the long-term trend information. The proposed end to end (ETE) projection was compared to traditional principal component analysis (PCA) as well as neural-network implementations of linear (LPCA) and non-linear PCA (NLPCA). Nine healthy participants completed two repetitions of fatigue tests during isometric, cyclic and random fatiguing contractions of the biceps brachii. The fatigue assessments were evaluated in terms of a modified sensitivity to variability ratio (SVR) and each method used a set of time-domain and frequency-domain features which maximized the SVR. It was shown that there was no statistical difference among ETE, PCA and LPCA (p>0.99) and that all three outperformed NLPCA (p<0.0022). Future work will include a broader comparison of these methods to other new and established fatigue indices. Copyright © 2011 Elsevier Ltd. All rights reserved.
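
    The ETE projection itself is a one-liner in linear algebra: each multi-feature trajectory is projected onto the unit vector joining its first and last points in feature space, yielding a scalar fatigue trend over time. A minimal sketch on placeholder feature data:

```python
import numpy as np

def ete_projection(F):
    """Project a (time x feature) trajectory onto the unit vector that
    connects its temporal start and end points in feature space."""
    v = F[-1] - F[0]
    v = v / np.linalg.norm(v)
    return (F - F[0]) @ v            # scalar fatigue index per time step

rng = np.random.default_rng(0)
F = np.cumsum(rng.standard_normal((200, 6)), axis=0)  # drifting features (placeholder)
trend = ete_projection(F)
```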

  20. Application of Principal Component Analysis to Classify Textile Fibers Based on UV-Vis Diffuse Reflectance Spectroscopy

    NASA Astrophysics Data System (ADS)

    Wang, C.; Chen, Q.; Hussain, M.; Wu, S.; Chen, J.; Tang, Z.

    2017-07-01

    This study provides a new approach to the classification of textile fibers using principal component analysis (PCA) based on UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS). Different natural and synthetic fibers such as cotton, wool, silk, linen, viscose, and polyester were used. The spectrum of each kind of fiber was scanned by a spectrometer equipped with an integrating sphere, and the characteristics of the UV-Vis diffuse reflectance spectra were analyzed. PCA revealed that the first three components represented 99.17% of the total variability in the ultraviolet region. The principal component score scatter plot (PC1 × PC2) indicated accurate classification of the six fiber varieties. It was therefore demonstrated that UV diffuse reflectance spectroscopy can be used as a novel approach to rapid, real-time fiber identification.

  1. Single-subject independent component analysis-based intensity normalization in non-quantitative multi-modal structural MRI.

    PubMed

    Papazoglou, Sebastian; Würfel, Jens; Paul, Friedemann; Brandt, Alexander U; Scheel, Michael

    2017-04-22

    Non-quantitative MRI is prone to intersubject intensity variation, which limits analyses based on signal intensity levels. Here, we propose a method that fuses non-quantitative routine T1-weighted (T1w), T2w, and T2w fluid-attenuated inversion recovery sequences using independent component analysis, and validate it on age- and sex-matched healthy controls. The proposed method leads to consistent and independent components with a significantly reduced coefficient of variation across subjects, suggesting potential to serve as automatic intensity normalization and thus to enhance the power of intensity-based statistical analyses. To exemplify this, we show that voxelwise statistical testing on single-subject independent components reveals in particular a widespread sex difference in white matter, which was previously shown using, for example, diffusion tensor imaging but was unobservable in the native MRI contrasts. In conclusion, our study shows that single-subject independent component analysis can be applied to routine sequences, thereby enhancing comparability between subjects. Unlike quantitative MRI, which requires specific sequences during acquisition, our method is applicable to existing MRI data. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.

  2. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually transformed into a one-dimensional array, causing issues such as the curse-of-dimensionality dilemma and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients can be modeled as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using 4-channel electromyographic (EMG) signals recorded from healthy subjects and amputees, illustrating the efficiency and effectiveness of the proposed method for biomedical signal analysis.

  3. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix

    PubMed Central

    Gao, H; Zhang, T; Wu, Y; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-01-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association study (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform the PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which multiple traits are transformed to a group of pseudo principal components. The PCA on the residual covariance matrix allows analyzing each pseudo principal component separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. However, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering and meat quality traits in beef cattle. PMID:24984606
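
    The transformation itself is straightforward: regress out the fixed effects, eigen-decompose the residual covariance, and rotate the traits by the eigenvectors so that each pseudo principal component can be mapped as a univariate trait. A numpy sketch with placeholder phenotypes and a toy fixed-effect design follows (the LASSO-based genetic model is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 500, 20
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # fixed effects (placeholder)
Y = rng.standard_normal((n, t))                           # n individuals x t traits

# Residuals after fixed effects, then eigen-decompose residual covariance
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
R = Y - X @ B
w, V = np.linalg.eigh(np.cov(R, rowvar=False))

# Pseudo principal components: analyze each column as a univariate trait
Y_pc = Y @ V
```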

  4. Quantitative analysis of multiple components based on liquid chromatography with mass spectrometry in full scan mode.

    PubMed

    Xu, Min Li; Li, Bao Qiong; Wang, Xue; Chen, Jing; Zhai, Hong Lin

    2016-08-01

    Although liquid chromatography with mass spectrometry in full scan mode can acquire all signals simultaneously over a large mass range at low cost, it is rarely used in quantitative analysis due to several problems, such as chromatographic drifts and peak overlap. In this paper, we propose a Tchebichef moment method for the simultaneous quantitative analysis of three active compounds in Qingrejiedu oral liquid based on three-dimensional spectra in full scan mode of liquid chromatography with mass spectrometry. After the Tchebichef moments were calculated directly from the spectra, quantitative linear models for the three active compounds were established by stepwise regression. All correlation coefficients were above 0.9978. The limits of detection and limits of quantitation were less than 0.11 and 0.49 μg/mL, respectively. The intra- and interday precisions were less than 6.54 and 9.47%, while the recovery ranged from 102.56 to 112.15%. Owing to the advantages of multi-resolution and inherent invariance properties, Tchebichef moments can provide favorable results even in situations of peak shifting and overlap, unknown interferences and noise, so the method can be applied to the analysis of three-dimensional spectra in full scan mode of liquid chromatography with mass spectrometry.

  5. Convolution-based one and two component FRAP analysis: theory and application.

    PubMed

    Tannert, Astrid; Tannert, Sebastian; Burgold, Steffen; Schaefer, Michael

    2009-06-01

    The method of fluorescence redistribution after photobleaching (FRAP) is receiving increasing interest in biological applications, as it is nowadays used not only to determine mobility parameters per se, but also to investigate dynamic changes in the concentration or distribution of diffusing molecules. Here, we develop a new, simple convolution-based approach to analyze FRAP data using the whole image information. This method does not require information about the timing and localization of the bleaching event; instead, it uses the first image acquired directly after photobleaching to calculate the intensity distributions. Changes in pools of molecules with different velocities, monitored by applying repetitive FRAP experiments within a single cell, can be analyzed by means of a global model assuming two global diffusion coefficients with changing proportions. We validate the approach by simulation and show that translocation of the YFP-fused PH domain of phospholipase Cdelta1 can be quantitatively monitored by FRAP analysis in a time-resolved manner. The new FRAP data analysis procedure may be applied to investigate signal transduction pathways using biosensors that change their mobility. An altered mobility in response to the activation of signaling cascades may result either from an altered size of the biosensor, e.g. due to multimerization processes, or from translocation of the sensor to an environment with different viscosity.

  6. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    NASA Astrophysics Data System (ADS)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG differences between neural responses to emotional stimuli vs. the neutral. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records for each emotional picture condition were concatenated with the neutral, and a three-step analysis was applied to each in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for the Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. neutral. Third, the brain regions that accounted for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition and the neutral. Compared to the neutral, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were consistent with other functional imaging studies.

  7. Fuzzy Clusterwise Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Desarbo, Wayne S.; Takane, Yoshio

    2007-01-01

    Generalized Structured Component Analysis (GSCA) was recently introduced by Hwang and Takane (2004) as a component-based approach to path analysis with latent variables. The parameters of GSCA are estimated by pooling data across respondents under the implicit assumption that they all come from a single, homogenous group. However, as has been…

  8. Analysis of the mineral acid-base components of acid-neutralizing capacity in Adirondack Lakes

    NASA Astrophysics Data System (ADS)

    Munson, R. K.; Gherini, S. A.

    1993-04-01

    Mineral acids and bases influence pH largely through their effects on acid-neutralizing capacity (ANC). This influence becomes particularly significant as ANC approaches zero. Analysis of data collected by the Adirondack Lakes Survey Corporation (ALSC) from 1469 lakes throughout the Adirondack region indicates that variations in ANC in these lakes correlate well with base cation concentrations (CB), but not with the sum of mineral acid anion concentrations (CA). This is because CA is relatively constant across the Adirondacks, whereas CB varies widely. Processes that supply base cations to solution are ion-specific. Sodium and silica concentrations are well correlated, indicating a common source, mineral weathering. Calcium and magnesium also covary but do not correlate well with silica. This indicates that ion exchange is a significant source of these cations in the absence of carbonate minerals. Iron and manganese concentrations are elevated in the lower waters of some lakes due to reducing conditions. This leads to an ephemeral increase in CB and ANC. When the lakes mix and oxic conditions are restored, these ions largely precipitate from solution. Sulfate is the dominant mineral acid anion in ALSC lakes. Sulfate concentrations are lowest in seepage lakes, commonly about 40 μeq/L less than in drainage lakes. This is due in part to the longer hydraulic detention time in seepage lakes, which allows slow sulfate reduction reactions more time to decrease lake sulfate concentration. Nitrate typically influences ANC during events such as snowmelt. Chloride concentrations are generally low, except in lakes impacted by road salt.

  9. Design and Analysis of a Novel Six-Component F/T Sensor based on CPM for Passive Compliant Assembly

    NASA Astrophysics Data System (ADS)

    Liang, Qiaokang; Zhang, Dan; Wang, Yaonan; Ge, Yunjian

    2013-10-01

    This paper presents the design and analysis of a six-component Force/Torque (F/T) sensor based on a Compliant Parallel Mechanism (CPM). The sensor is used to measure forces along the x-, y-, and z-axes (Fx, Fy and Fz) and moments about the x-, y-, and z-axes (Mx, My and Mz) simultaneously, and to provide passive compliance during parts handling and assembly. In particular, the structural design, the details of the measuring principle and the kinematics are presented. Afterwards, based on the Design of Experiments (DOE) approach provided by the software ANSYS®, a Finite Element Analysis (FEA) is performed, with the objective of achieving both high sensitivity and isotropy of the sensor. The results of the FEA show that the proposed sensor possesses high performance and robustness.

  10. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, H.D.; Ayers, C.W.; Casada, D.A.

    1996-11-26

    A new instrument is described for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician. 4 figs.

  11. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.
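
    In outline, the slip-poles component can be located as a sideband of the line frequency in the motor current spectrum: its frequency offset tracks motor speed, while its magnitude tracks condition. The sketch below is a simplified, hypothetical reading of that idea, not the patented processing chain; the search window and exclusion band are arbitrary illustrative values.

```python
import numpy as np

def slip_poles_component(current, fs, f_line=60.0, search=5.0):
    """Return the frequency and magnitude of the dominant sideband near
    the line frequency in a motor current signal (simplified sketch)."""
    n = len(current)
    spec = np.abs(np.fft.rfft(current * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = ((freqs > f_line - search) & (freqs < f_line + search)
            & (np.abs(freqs - f_line) > 0.2))   # exclude the line peak itself
    k = np.argmax(spec[band])
    return freqs[band][k], spec[band][k]
```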

  12. Dissecting the phenotypic components of crop plant growth and drought responses based on high-throughput image analysis.

    PubMed

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-12-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. © 2014 American Society of Plant Biologists. All rights reserved.

  13. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate the robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of red-green-blue (RGB) fluorescent images of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity generates weak edges, which represent a challenge for other segmentation methods as well. Through comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image was varied by almost two orders of magnitude.

  14. Multi-objective analysis of a component-based representation within an interactive evolutionary design system

    NASA Astrophysics Data System (ADS)

    Machwe, A. T.; Parmee, I. C.

    2007-07-01

    This article describes research relating to a user-centered evolutionary design system that evaluates both engineering and aesthetic aspects of design solutions during early-stage conceptual design. The experimental system comprises several components relating to user interaction, problem representation, evolutionary search and exploration and online learning. The main focus of the article is the evolutionary aspect of the system when using a single quantitative objective function plus subjective judgment of the user. Additionally, the manner in which the user-interaction aspect affects system output is assessed by comparing Pareto frontiers generated with and without user interaction via a multi-objective evolutionary algorithm (MOEA). A solution clustering component is also introduced and it is shown how this can improve the level of support to the designer when dealing with a complex design problem involving multiple objectives. Supporting results are from the application of the system to the design of urban furniture which, in this case, largely relates to seating design.

  15. Adsorption isotherm models for dye removal by cationized starch-based material in a single component system: error analysis.

    PubMed

    Gimbert, Frédéric; Morin-Crini, Nadia; Renault, François; Badot, Pierre-Marie; Crini, Grégorio

    2008-08-30

    This article describes the adsorption of an anionic dye, namely C.I. Acid Blue 25 (AB 25), from aqueous solutions onto a cationized starch-based adsorbent. Temperature was varied to investigate its effect on the adsorption capacity. Equilibrium adsorption isotherms were measured for the single component system, and the experimental data were analyzed using the Langmuir, Freundlich, Tempkin, Generalized, Redlich-Peterson, and Toth isotherm equations. Because of the bias introduced by using the correlation coefficient on linearized forms, five error functions were used to determine the single component parameters by non-linear regression. The error analysis showed that, compared with the other models, the Langmuir model best described the dye adsorption data. Both the linear regression method and the non-linear error functions provided the best fit to the experimental data with the Langmuir model.
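
    For illustration, a Langmuir isotherm can be fitted directly by non-linear regression, avoiding the linearization bias mentioned above, with an error function such as the sum of squared errors evaluated on the fit. The data points below are invented placeholders, not the AB 25 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, K):
    """Langmuir isotherm: q = qm * K * C / (1 + K * C)."""
    return qm * K * C / (1.0 + K * C)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])      # equilibrium conc. (placeholder)
qe = np.array([60.0, 95.0, 135.0, 165.0, 185.0, 195.0])  # adsorbed amount (placeholder)

popt, _ = curve_fit(langmuir, Ce, qe, p0=(200.0, 0.05))
sse = np.sum((qe - langmuir(Ce, *popt)) ** 2)   # one of several error functions
print("qm = %.1f, K = %.4f, SSE = %.2f" % (popt[0], popt[1], sse))
```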

  16. Discriminant analysis of Chinese patent medicines based on near-infrared spectroscopy and principal component discriminant transformation.

    PubMed

    Xu, Zhihong; Liu, Yan; Li, Xiaoyong; Cai, Wensheng; Shao, Xueguang

    2015-01-01

    Principal component discriminant transformation was applied for the discrimination of different Chinese patent medicines based on near-infrared (NIR) spectroscopy. In the method, an optimal set of orthogonal discriminant vectors, which highlight the differences between the NIR spectra of different classes, is designed by maximizing Fisher's discriminant function. A model for discriminating one class from the others can therefore be obtained from the tiny differences between the NIR spectra of different classes. Furthermore, because NIR spectra contain a large amount of redundant information, principal component analysis (PCA) is employed to reduce the dimension. In addition, continuous wavelet transform (CWT) is used as the pretreatment method to remove the varying background. To validate the method, different medicines and the same medicine from different manufacturers were studied. The results show that all the models provide 100% discrimination. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Design and Validation of a Morphing Myoelectric Hand Posture Controller Based on Principal Component Analysis of Human Grasping

    PubMed Central

    Segil, Jacob L.; Weir, Richard F. ff.

    2015-01-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures including functional grasps using between two and four surface electromyography (EMG) electrodes pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest performing map controlled over 13 out of 15 joints accurately. PMID:23649286

  18. Design and validation of a morphing myoelectric hand posture controller based on principal component analysis of human grasping.

    PubMed

    Segil, Jacob L; Weir, Richard F ff

    2014-03-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures including functional grasps using between two and four surface electromyography (EMG) electrodes pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest performing map controlled over 13 out of 15 joints accurately.

  19. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors and Artificial Neural Networks (ANNs) for classification purpose. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
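
    A compact scikit-learn rendering of the three stages, ICA for dimensionality reduction, PLS for descriptor prediction, and an ANN for classification, is given below on placeholder e-nose data; sensor counts, component numbers, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((26, 16))   # e-nose responses: wines x sensors (placeholder)
Y = rng.standard_normal((26, 4))    # sensory-panel descriptors (placeholder)
c = rng.integers(0, 3, 26)          # wine class labels (placeholder)

S = FastICA(n_components=5, random_state=0).fit_transform(X)  # reduce dimension
pls = PLSRegression(n_components=3).fit(S, Y)                 # predict descriptors
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(S, c)                 # classify wines
```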

  20. SYNCSA--R tool for analysis of metacommunities based on functional traits and phylogeny of the community components.

    PubMed

    Debastiani, Vanderlei J; Pillar, Valério D

    2012-08-01

    SYNCSA is an R package for the analysis of metacommunities based on functional traits and phylogeny of the community components. It offers tools to calculate several matrix correlations that express trait-convergence assembly patterns, trait-divergence assembly patterns and phylogenetic signal in functional traits at the species pool level and at the metacommunity level. SYNCSA is a package for the R environment, under a GPL-2 open-source license and freely available on CRAN official web server for R (http://cran.r-project.org). vanderleidebastiani@yahoo.com.br.

  1. Semi-blind independent component analysis of fMRI based on real-time fMRI system.

    PubMed

    Ma, Xinyue; Zhang, Hang; Zhao, Xiaojie; Yao, Li; Long, Zhiying

    2013-05-01

    Real-time functional magnetic resonance imaging (fMRI) is a neurofeedback tool that enables researchers to train individuals to actively gain control over their brain activation. Independent component analysis (ICA), a data-driven model, is seldom used in real-time fMRI studies due to its large time cost, though it has been very popular for offline analysis of fMRI data. The feasibility of performing real-time ICA (rtICA) processing was demonstrated in a previous study. However, rtICA has only been applied to single-slice data rather than full-brain data. In order to improve the performance of rtICA, we propose semi-blind real-time ICA (sb-rtICA) for our real-time fMRI system, which adds to rtICA a regularization of certain estimated time courses using the experiment paradigm information. Both simulated and real-time fMRI experiments were conducted to compare the two approaches. Results from simulated and real full-brain fMRI data demonstrate that sb-rtICA outperforms rtICA in robustness, computational time and spatial detection power. Moreover, in contrast to rtICA, the first component estimated by sb-rtICA tends to be the target component in more sliding windows.

  2. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    SciTech Connect

    Yu, P.

    2008-01-01

    Recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This synchrotron technique, taking advantage of bright synchrotron light (millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, the synchrotron SRFTIRM technique usually yields a large volume of molecular spectral data. The objective of this article was to illustrate the use of two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to reveal molecular spectral relations by using not just one intensity or frequency point of a molecular spectrum, but the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify the spectral component peaks of molecular structure, functional groups and biopolymers. By applying these four methods (the multivariate techniques together with Gaussian and Lorentzian modeling), inherent molecular structures, functional groups and biopolymer conformations between and among biological samples can be quantified, discriminated and classified with great efficiency.

  3. Experimental Analysis of the Effective Components of Problem-Based Learning

    ERIC Educational Resources Information Center

    Pease, Maria A.; Kuhn, Deanna

    2011-01-01

    Problem-based learning (PBL) is widely endorsed as a desirable learning method, particularly in science. Especially in light of the method's heavy demand on resources, evidence-based practice is called for. Rigorous studies of the method's effectiveness, however, are scarce. In Study 1, college students enrolled in an elementary physics course…

  4. Component Based Electronic Voting Systems

    NASA Astrophysics Data System (ADS)

    Lundin, David

    An electronic voting system may be said to be composed of a number of components, each of which has a number of properties. One of the most attractive effects of this way of thinking is that each component may have an attached in-depth threat analysis and verification strategy. Furthermore, the need to include the full system when making changes to a component is minimised and a model at this level can be turned into a lower-level implementation model where changes can cascade to as few parts of the implementation as possible.

  5. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure. Next, sensor data in each cluster are aggregated at the cluster head node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a definite amount of variance. Computer simulations show that the proposed model can greatly reduce communication and obtains a lower mean square error than other PCA-based algorithms.
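
    The final stage, PCA compression with an error-bound guarantee, can be sketched by keeping the fewest principal components whose reconstruction stays within a prescribed mean-squared-error budget; the cluster data and bound below are placeholders.

```python
import numpy as np

def pca_compress(X, max_mse):
    """Return the PCA reconstruction using the fewest components whose
    mean squared reconstruction error is within max_mse."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    for k in range(1, len(s) + 1):
        Xr = mu + (U[:, :k] * s[:k]) @ Vt[:k]
        if np.mean((X - Xr) ** 2) <= max_mse:
            return Xr, k
    return X, len(s)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))   # one cluster: time x sensors (placeholder)
Xr, k = pca_compress(X, max_mse=0.5)
```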

  6. [Pre-warning model of bacterial foodborne illness based on performance of principal component analysis combined with support vector machine].

    PubMed

    Duan, Hejun; Shao, Bing

    2010-11-01

    Based on historical data of bacterial foodborne illness, a scoring system for the pre-warning model was established in this study according to rated harm factors, enabling an effective predictive model for the analysis of foodborne illness incidents. To extract the useful information, principal component analysis was performed on the normalized raw data to reduce the dimensionality. The data were randomly split: 70% were used to train a support vector machine regression model, which was then used to predict the remaining 30%. Reducing the dimensionality by selecting the optimal PCs improved both the calibration and the efficiency. The combination of principal component analysis (PCA) and support vector machine (SVM) provided reliable results in the pre-warning model, especially for high-dimensional data with limited sample sizes. Furthermore, it achieved 80% accuracy with optimized parameters. The pre-warning model of bacterial foodborne illness can assess poisoning incidents and provides a scientific basis for reducing the incidence of bacterial food poisoning.
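
    A minimal scikit-learn analogue of the PCA-plus-SVM pre-warning pipeline, with the 70/30 random split mentioned above, is sketched on placeholder data; the feature set, component count, and SVM parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 15))  # rated harm factors per incident (placeholder)
y = rng.random(300)                 # severity score (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=6), SVR(C=1.0))
model.fit(X_tr, y_tr)
print("R^2 on the held-out 30%:", model.score(X_te, y_te))
```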

  7. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale were investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting the domain motions by rigid-body domain analysis, tICA was applied to the resulting rigid-body trajectory, yielding the slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA was not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow protein dynamics.
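
    tICA reduces to a generalized symmetric eigenproblem: the symmetrized time-lagged covariance is diagonalized against the instantaneous covariance, and eigenvalues near one mark the slowest modes. A minimal numpy/SciPy sketch on placeholder trajectory data (the rigid-body domain analysis is not shown):

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag):
    """Time-structure based ICA: solve C(tau) v = lambda C(0) v for
    mean-free data X (rows: time frames); slowest modes come first."""
    Xc = X - X.mean(axis=0)
    C0 = Xc.T @ Xc / (len(Xc) - 1)
    Ct = Xc[:-lag].T @ Xc[lag:] / (len(Xc) - lag - 1)
    Ct = (Ct + Ct.T) / 2.0                   # symmetrize the lagged covariance
    vals, vecs = eigh(Ct, C0)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((2000, 6)), axis=0)  # slow drift (placeholder)
vals, modes = tica(X, lag=10)
```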

  8. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  10. Adherence to combined Lamivudine + Zidovudine versus individual components: a community-based retrospective medicaid claims analysis.

    PubMed

    Legorreta, A; Yu, A; Chernicoff, H; Gilmore, A; Jordan, J; Rosenzweig, J C

    2005-11-01

    Adherence to a fixed dose combination of dual nucleoside antiretroviral therapy was compared between human immunodeficiency virus (HIV)-infected patients newly started on a fixed dose combination of lamivudine (3TC) 150 mg/zidovudine (ZDV) 300 mg and patients taking its components as separate pills. Medicaid pharmacy claims data were used for the analyses. To examine the association between treatment group and medication adherence, three types of multivariate regressions were employed. In addition, all regressions were conducted for the whole population using data from 1995 to 2001, as well as for a subpopulation that excluded data prior to September 1997. Model covariates included patient characteristics, healthcare utilization, and non-study antiretroviral therapy use. The likelihood of ≥95% adherence among patients on combination therapy was three times greater than that of patients taking 3TC and ZDV as separate pills. Also, combination therapy patients had on average 1.4 fewer adherence failures per year of follow-up and nearly double the time to adherence failure compared to the separate-pills group. The consistency of the study results suggests that fixed dose combination therapies such as lamivudine (3TC) 150 mg/zidovudine (ZDV) 300 mg should be considered when prescribing HIV treatment that includes an appropriate dual nucleoside.

  11. EEG/fMRI fusion based on independent component analysis: integration of data-driven and model-driven methods.

    PubMed

    Lei, Xu; Valdes-Sosa, Pedro A; Yao, Dezhong

    2012-09-01

    Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) provide complementary noninvasive information on brain activity, and EEG/fMRI fusion can achieve higher spatiotemporal resolution than either modality alone. This paper focuses on independent component analysis (ICA)-based EEG/fMRI fusion. In order to appreciate the issues, we first describe the potential and limitations of the developed fusion approaches: fMRI-constrained EEG imaging, EEG-informed fMRI analysis, and symmetric fusion. We then outline some newly developed hybrid fusion techniques using ICA and combinations of data-driven and model-driven methods, with special mention of the spatiotemporal EEG/fMRI fusion (STEFF). Finally, we discuss the current trend in methodological development and the existing limitations on extrapolating neural dynamics.

  12. Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.

    PubMed

    Siuly, Siuly; Li, Yan

    2015-04-01

    The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals, to derive valuable features from original epileptic EEG data, and to identify an efficient classifier for those features. An optimum allocation based principal component analysis method, named OA_PCA, is developed for feature extraction from epileptic EEG data. As EEG data from different channels are correlated and very numerous, the optimum allocation (OA) scheme is used to select the most favorable representatives with minimal variability from the large pool of EEG data. Principal component analysis (PCA) is applied to construct uncorrelated components and to reduce the dimensionality of the OA samples for enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers are applied and tested: least squares support vector machine (LS-SVM), naive Bayes classifier (NB), k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA). Furthermore, our approaches are compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% overall classification accuracy (OCA), an improvement of up to 7.10% over existing algorithms for the epileptic EEG data. The major finding of this research is that LS-SVM with the 1v1 scheme is the best technique for the OA_PCA features in epileptic EEG signal classification, outperforming the recently reported methods in the literature.
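
    A hedged sketch of the classifier-comparison stage is given below, with scikit-learn's SVC (whose default multiclass mode is one-vs-one, loosely mirroring the 1v1 scheme) standing in for LS-SVM, a built-in digits dataset standing in for the EEG features, and the optimum-allocation step omitted.

    ```python
    # PCA features feeding four candidate classifiers, compared by CV score.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)          # stand-in for EEG features
    for name, clf in [("SVC (1v1)", SVC()), ("NB", GaussianNB()),
                      ("KNN", KNeighborsClassifier()),
                      ("LDA", LinearDiscriminantAnalysis())]:
        pipe = make_pipeline(PCA(n_components=20), clf)
        print(name, cross_val_score(pipe, X, y, cv=5).mean().round(3))
    ```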

  13. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA) -based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported unmixing algorithm [1,2,3]. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  14. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image-area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main-lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and
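
    A minimal kernel-PCA sketch follows, with random binary vectors standing in for the flattened silhouettes; the downstream derivation of gait parameters is not reproduced.

    ```python
    # Kernel-PCA feature extraction on stand-in "silhouette" vectors.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(3)
    silhouettes = (rng.random((60, 32 * 24)) > 0.7).astype(float)  # 60 frames

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1e-3)
    features = kpca.fit_transform(silhouettes)   # nonlinear frame features
    print(features.shape)                        # (60, 2)
    ```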

  15. Bearing fault recognition method based on neighbourhood component analysis and coupled hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Hongchao; Yuan, Haodong

    2016-01-01

    Because of the important role rolling element bearings play in rotating machines, condition monitoring and fault diagnosis systems should be established to avoid abrupt breakage during operation. Various features from the time, frequency, and time-frequency domains are usually used for bearing or machinery condition monitoring. In this study, an NCA-based feature extraction (FE) approach is proposed to reduce the dimensionality of the original feature set and avoid the "curse of dimensionality". Furthermore, a coupled hidden Markov model (CHMM) based on multichannel data acquisition is applied to diagnose bearing or machinery faults. Two case studies are presented to validate the proposed approach for both bearing fault diagnosis and fault severity classification. The experimental results show that the proposed NCA-CHMM can remove redundant information, fuse data from different channels, and improve the diagnosis results.
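
    The NCA reduction step can be sketched as below; the coupled HMM stage is too involved for a few lines, so a plain KNN classifier stands in downstream and a built-in dataset replaces the bearing feature set.

    ```python
    # NCA dimensionality reduction followed by a KNN stand-in classifier.
    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import (KNeighborsClassifier,
                                   NeighborhoodComponentsAnalysis)
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)            # stand-in for bearing features
    pipe = make_pipeline(
        StandardScaler(),
        NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
        KNeighborsClassifier())
    print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
    ```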

  16. Integration of Multiple Components in Polystyrene-based Microfluidic Devices Part 2: Cellular Analysis

    PubMed Central

    Anderson, Kari B.; Halpin, Stephen T.; Johnson, Alicia S.; Martin, R. Scott; Spence, Dana M.

    2012-01-01

    In Part II of this series describing the use of polystyrene (PS) devices for microfluidic-based cellular assays, various cell types and detection strategies are employed to perform three fundamental assays often associated with cells. Specifically, using either integrated electrochemical sensing or optical measurements with a standard multi-well plate reader, cellular uptake, production, or release of important cellular analytes are determined on a PS-based device. One experiment involved the fluorescence measurement of nitric oxide (NO) produced within an endothelial cell line following stimulation with ATP. The result was a four-fold increase in NO production (as compared to a control), with this receptor-based mechanism of NO production verifying the maintenance of cell receptors following immobilization onto the PS substrate. The ability to monitor cellular uptake was also demonstrated by optical determination of Ca2+ uptake into endothelial cells following stimulation with the Ca2+ ionophore A23187. The result was a significant increase (42%) in calcium uptake in the presence of the ionophore, as compared to a control (17%) (p < 0.05). Finally, the release of catecholamines from a dopaminergic cell line (PC 12 cells) was electrochemically monitored, with the electrodes embedded in the PS-based device. The PC 12 cells adhered better to the PS devices than to PDMS. Potassium stimulation resulted in the release of 114 ± 11 µM catecholamines, a significant increase (p < 0.05) over the release from cells that had been exposed to an inhibitor (reserpine, 20 ± 2 µM of catecholamines). The ability to successfully measure multiple analytes, generated in different ways by the various cells under investigation, suggests that PS may be a useful material for microfluidic device fabrication, especially considering the enhanced cell adhesion to PS, its enhanced rigidity/amenability to automation, and its ability to enable a wider range of

  17. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm that combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data are divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) and the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and any neighbouring black pixel; no edge exists from a black pixel to a neighbouring white pixel. This produces a disconnected-components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components where possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges. The proposed technique is evaluated using two Australian data sets.
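
    The connected-component step of the second phase can be illustrated with scipy.ndimage on a toy primary mask; the LiDAR preprocessing, graph construction, and NDVI-based discrimination stages are not reproduced.

    ```python
    # Connected-component labelling of a toy primary building mask.
    import numpy as np
    from scipy import ndimage

    mask = np.zeros((12, 12), dtype=int)
    mask[1:4, 1:5] = 1                           # one elevated object
    mask[7:11, 6:10] = 1                         # another elevated object
    labels, n_objects = ndimage.label(mask)      # 4-connectivity by default
    areas = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    print(n_objects, areas)                      # object count and pixel areas
    ```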

  18. Vibration-based damage detection in an aircraft wing scaled model using principal component analysis and pattern recognition

    NASA Astrophysics Data System (ADS)

    Trendafilova, I.; Cartmell, M. P.; Ostachowicz, W.

    2008-06-01

    This study deals with vibration-based fault detection in structures and suggests a viable methodology based on principal component analysis (PCA) and a simple pattern recognition (PR) method. The frequency response functions (FRFs) of the healthy and the damaged structure are used as initial data. A PR procedure based on the nearest neighbour principle is applied to distinguish between the damaged and the healthy wing data categories. A modified PCA method is suggested here, which not only reduces the dimensionality of the FRFs but also makes the PCA-transformed data from the two categories more separable. It is applied to selected frequency bands of the FRFs, which permits the reduction of the PCA-transformed FRFs to two new variables used as damage features. In this study, the methodology is developed and demonstrated using the vibration response of a scaled aircraft wing simulated by a finite element (FE) model. The suggested damage detection methodology is based purely on the analysis of the vibration response of the structure. This makes it quite generic and permits its potential development and application to measured vibration data from real aircraft wings as well as other real and complex structures.

  19. Hybrid dimensionality reduction method based on support vector machine and independent component analysis.

    PubMed

    Moon, Sangwoo; Qi, Hairong

    2012-05-01

    This paper presents a new hybrid dimensionality reduction method to seek projection through optimization of both structural risk (supervised criterion) and data independence (unsupervised criterion). Classification accuracy is used as a metric to evaluate the performance of the method. By minimizing the structural risk, projection originated from the decision boundaries directly improves the classification performance from a supervised perspective. From an unsupervised perspective, projection can also be obtained based on maximum independence among features (or attributes) in data to indirectly achieve better classification accuracy over more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimum redundancy exists between the projections, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid dimensionality reduction method that satisfies both criteria simultaneously provides higher classification performance, especially for noisy data sets, in relatively lower dimensional space than various existing methods.

  20. IVUS-based FSI models for human coronary plaque progression study: components, correlation and predictive analysis.

    PubMed

    Wang, Liang; Wu, Zheyang; Yang, Chun; Zheng, Jie; Bach, Richard; Muccigrosso, David; Billiar, Kristen; Maehara, Akiko; Mintz, Gary S; Tang, Dalin

    2015-01-01

    Atherosclerotic plaque progression is believed to be associated with mechanical stress conditions. Patient follow-up in vivo intravascular ultrasound coronary plaque data were acquired to construct fluid-structure interaction (FSI) models with cyclic bending to obtain flow wall shear stress (WSS), plaque wall stress (PWS) and strain (PWSn) data and investigate correlations between plaque progression measured by wall thickness increase (WTI), cap thickness increase (CTI), lipid depth increase (LDI) and risk factors including wall thickness (WT), WSS, PWS, and PWSn. Quarter average values (n = 178-1016) of morphological and mechanical factors from all slices were obtained for analysis. A predictive method was introduced to assess prediction accuracy of risk factors and identify the optimal predictor(s) for plaque progression. A combination of WT and PWS was identified as the best predictor for plaque progression measured by WTI. Plaque WT had best overall correlation with WTI (r = -0.7363, p < 1E-10), cap thickness (r = 0.4541, p < 1E-10), CTI (r = -0.4217, p < 1E-8), LD (r = 0.4160, p < 1E-10), and LDI (r = -0.4491, p < 1E-10), followed by PWS (with WTI: (r = -0.3208, p < 1E-10); cap thickness: (r = 0.4541, p < 1E-10); CTI: (r = -0.1719, p = 0.0190); LD: (r = -0.2206, p < 1E-10); LDI: r = 0.1775, p < 0.0001). WSS had mixed correlation results.

  1. Quantitative analysis of multi-component complex oil spills based on the least-squares support vector regression

    NASA Astrophysics Data System (ADS)

    Tan, Ailing; Zhao, Yong; Wang, Siyuan

    2016-10-01

    Quantitative analysis of simulated complex oil spills was researched based on the PSO-LS-SVR method. Forty simulated mixed oil spill samples were made with different concentration proportions of gasoline, diesel, and kerosene oil, and their near-infrared spectra were collected. The parameters of the least squares support vector machine were optimized by the particle swarm optimization algorithm, and optimal quantitative concentration models of the three-component oil spills were established. The best regularization parameter C and kernel parameter σ were 48.1418 and 0.1067 for the gasoline model, 53.2820 and 0.1095 for the diesel model, and 59.1689 and 0.1000 for the kerosene model, respectively. The coefficients of determination R2 of the prediction models were 0.9983, 0.9907, and 0.9942, respectively; the RMSEP values were 0.0753, 0.1539, and 0.0789, respectively. For the gasoline, diesel fuel, and kerosene oil models, the mean and variance of the prediction absolute error were -0.0176 ± 0.0636 μL/mL, -0.0084 ± 0.1941 μL/mL, and 0.00338 ± 0.0726 μL/mL, respectively. The results showed that the concentration of each component of the oil spill samples could be detected by NIR technology combined with the PSO-LS-SVR regression method; the predictions were accurate and reliable, so this method can provide an effective means for the quantitative detection and analysis of complex marine oil spills.

  2. A fuzzy-based growth model with principal component analysis selection for carpal bone-age assessment.

    PubMed

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Tiu, Chui-Mei

    2010-06-01

    There are two well-known methods for assessing bone age, the Greulich-Pyle method and the Tanner-Whitehouse method, both of which use hand radiograms to assess bone age and assist medical doctors in identifying the growth status of children. Basically, the morphology of bones can be evaluated to quantitatively describe maturity. This study extracted the morphology of the carpal bones and applied fuzzy theory with principal component analysis to estimate skeletal maturity. Five geometric features of the carpals were extracted, including the bone area, the area ratio, and the bone contour. In order to analyze these features, principal component analysis and statistical correlation, combined with three different types of procedure, were used to construct a growth model of the carpals. Eventually, the three types of procedure with fuzzy rules were used to construct a bone-age assessment system that identifies the maturity of children. The study shows that the proposed fuzzy-rule-based model has an accuracy rate above 89% in Types I and II, and above 87% in Type III, within a tolerance of 1.5 years.

  3. Failure Analysis of Ceramic Components

    SciTech Connect

    B.W. Morris

    2000-06-29

    Ceramics are being considered for a wide range of structural applications due to their low density and their ability to retain strength at high temperatures. The inherent brittleness of monolithic ceramics requires a departure from the deterministic design philosophy utilized to analyze metallic structural components. The design program "Ceramic Analysis and Reliability Evaluation of Structures Life" (CARES/LIFE) developed by NASA Lewis Research Center uses a probabilistic approach to predict the reliability of monolithic components under operational loading. The objective of this study was to develop an understanding of the theories used by CARES/LIFE to predict the reliability of ceramic components and to assess the ability of CARES/LIFE to accurately predict the fast fracture behavior of monolithic ceramic components. A finite element analysis was performed to determine the temperature and stress distribution of a silicon carbide O-ring under diametral compression. The results of the finite element analysis were supplied as input into CARES/LIFE to determine the fast fracture reliability of the O-ring. Statistical material strength parameters were calculated from four-point flexure bar test data. The predicted reliability showed excellent correlation with O-ring compression test data, indicating that the CARES/LIFE program can be used to predict the reliability of ceramic components subjected to complicated stress states using material properties determined from simple uniaxial tensile tests.
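
    The weakest-link reasoning behind such fast-fracture predictions can be illustrated with a two-parameter Weibull fit, as in the hedged sketch below with hypothetical flexure strengths; note that CARES/LIFE additionally integrates over the stressed volume or area, which is omitted here.

    ```python
    # Two-parameter Weibull reliability from hypothetical flexure strengths.
    import numpy as np
    from scipy import stats

    strengths = np.array([310., 352., 380., 395., 410.,
                          422., 438., 450., 467., 489.])   # MPa (hypothetical)
    m, _, sigma0 = stats.weibull_min.fit(strengths, floc=0)  # m: Weibull modulus
    service_stress = 300.0                                   # MPa
    reliability = np.exp(-(service_stress / sigma0) ** m)    # survival probability
    print(f"m = {m:.1f}, sigma0 = {sigma0:.0f} MPa, R = {reliability:.3f}")
    ```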

  4. IVUS-Based FSI Models for Human Coronary Plaque Progression Study: Components, Correlation and Predictive Analysis

    PubMed Central

    Wang, Liang; Wu, Zheyang; Yang, Chun; Zheng, Jie; Bach, Richard; Muccigrosso, David; Billiar, Kristen; Maehara, Akiko; Mintz, Gary S.; Tang, Dalin

    2014-01-01

    Atherosclerotic plaque progression is believed to be associated with mechanical stress conditions. Patient follow-up in vivo intravascular ultrasound coronary plaque data were acquired to construct fluid-structure interaction (FSI) models with cyclic bending to obtain flow wall shear stress (WSS), plaque wall stress (PWS) and strain (PWSn) data and investigate correlations between plaque progression measured by wall thickness increase (WTI), cap thickness increase (CTI), lipid depth increase (LDI) and risk factors including wall thickness (WT), WSS, PWS, and PWSn. Quarter average values (n=178–1016) of morphological and mechanical factors from all slices were obtained for analysis. A predictive method was introduced to assess prediction accuracy of risk factors and identify the optimal predictor(s) for plaque progression. A combination of wall thickness and plaque wall stress was identified as the best predictor for plaque progression measured by WTI. Plaque wall thickness had best overall correlation with WTI (r=−0.7363, p<1e-10), cap thickness (r=0.4541, p<1e-10), CTI (r = − 0.4217, p<1e-8), LD (r=0.4160, p<1e-10), and LDI (r= −0.4491, p<1e-10), followed by plaque wall stress (with WTI: (r = − 0.3208, p<1e-10); cap thickness: (r=0.4541, p<1e-10); CTI: (r = − 0.1719, p=0.0190); LD: (r= − 0.2206, p<1e-10); LDI: r=0.1775, p<0.0001). Wall shear stress had mixed correlation results. PMID:25245219

  5. Parametric Analysis to Study the Influence of Aerogel-Based Renders’ Components on Thermal and Mechanical Performance

    PubMed Central

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-01-01

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study’s objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect. PMID:28773460

  6. Parametric Analysis to Study the Influence of Aerogel-Based Renders' Components on Thermal and Mechanical Performance.

    PubMed

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-05-04

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study's objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect.

  7. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    SciTech Connect

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-07

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  8. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  9. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of the social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components, all of the textbooks' content (including texts, exercises, and illustrations) was examined based on…

  10. Daily PM2.5 concentration prediction based on principal component analysis and LSSVM optimized by cuckoo search algorithm.

    PubMed

    Sun, Wei; Sun, Jingyi

    2017-03-01

    Increased attention has been paid to PM2.5 pollution in China. Due to its detrimental effects on the environment and health, it is important to establish a high-precision PM2.5 concentration forecasting model for monitoring and control. This paper presents a novel hybrid model based on principal component analysis (PCA) and least squares support vector machine (LSSVM) optimized by cuckoo search (CS). First, PCA is adopted to extract original features and reduce dimension for input selection. Then LSSVM is applied to predict the daily PM2.5 concentration. The parameters of the LSSVM are fine-tuned by CS to improve its generalization. An experimental study reveals that the proposed approach outperforms both a single LSSVM model with default parameters and a general regression neural network (GRNN) model in PM2.5 concentration prediction. The established model therefore has the potential to be applied in air quality forecasting systems.

  11. Spherical mesh adaptive direct search for separating quasi-uncorrelated sources by range-based independent component analysis.

    PubMed

    Selvan, S Easter; Borckmans, Pierre B; Chattopadhyay, A; Absil, P-A

    2013-09-01

    It is seemingly paradoxical to the classical definition of independent component analysis (ICA) that, in reality, the true sources are often not strictly uncorrelated. With this in mind, this letter concerns a framework to extract quasi-uncorrelated sources with finite supports by optimizing a range-based contrast function under unit-norm constraints (to handle the inherent scaling indeterminacy of ICA) but without orthogonality constraints. Despite the appealing properties of the range-based contrast function (e.g., the absence of mixing local optima), the function is not differentiable everywhere. Unfortunately, there is a dearth of literature on derivative-free optimizers that effectively handle such a nonsmooth yet promising contrast function. This is the compelling reason for the design of a nonsmooth optimization algorithm on a manifold of matrices having unit-norm columns, with the following objectives: to ascertain convergence to a Clarke stationary point of the contrast function and to adhere to the necessary unit-norm constraints more naturally. The proposed nonsmooth optimization algorithm crucially relies on the design and analysis of an extension of the mesh adaptive direct search (MADS) method to handle locally Lipschitz objective functions defined on the sphere. The applicability of the algorithm in the ICA domain is demonstrated with simulations involving natural, face, aerial, and texture images.

  12. Principal component analysis of molecularly based signals from infant formula contaminations using LC-MS and NMR in foodomics.

    PubMed

    Inoue, Koichi; Tanada, Chihiro; Hosoya, Takahiro; Yoshida, Shuhei; Akiba, Takashi; Min, Jun Zhe; Todoroki, Kenichiro; Yamano, Yutaka; Kumazawa, Shigenori; Toyo'oka, Toshimasa

    2016-08-01

    Developing analytical assessments of unexpected excess contamination in infant formula has been a major undertaking in addressing the widespread issue of food safety and security. Foodomics based on metabolomics techniques provides powerful tools for detecting tampering cases with intentional contamination. However, safety and risk assessments of infant formula that reveal not only the targeted presence of toxic chemicals but also molecular changes involving unexpected contaminations have not been reported. In this study, a large set of raw molecularly based signals from infant formula was analysed using reversed-phase and hydrophilic-interaction chromatography with time-of-flight MS (LC-MS) and proton nuclear magnetic resonance (1H NMR), and then processed by principal component analysis (PCA). PCA plots visualised signature trends in the complex signal-data batches for each excess contamination of detectable chemicals by LC-MS and NMR. These trends in the different batches, covering excess chemical contaminations such as pesticides, melamine, and heavy metals, as well as out-of-date products, could be visualised from the spectrally discriminated infant formula samples. PCA plots make it possible to maximise the covariance between stable lot-to-lot uniformity and excess exogenous contamination and/or degradation, discriminating the molecularly based signals from infant formulas.

  13. Simultaneous multi-wavelength phase-shifting interferometry based on principal component analysis with a color CMOS

    NASA Astrophysics Data System (ADS)

    Fan, Jingping; Lu, Xiaoxu; Xu, Xiaofei; Zhong, Liyun

    2016-05-01

    From a sequence of simultaneous multi-wavelength phase-shifting interferograms (SMWPSIs) recorded by a color CMOS, a principal component analysis (PCA) based multi-wavelength interferometry (MWI) is proposed. First, a sequence of SMWPSIs with unknown phase shifts is recorded with a single-chip color CMOS camera. Subsequently, the wrapped phases of each single wavelength are retrieved with the PCA algorithm. Finally, the unambiguous phase of the extended synthetic wavelength is obtained by subtracting the single-wavelength wrapped phases. In addition, to eliminate the additional phase introduced by the microscope and the intensity crosstalk among the three color channels, a two-step phase compensation method, performed with and without the measured object in the experimental system, is employed. Compared with conventional single-wavelength phase-shifting interferometry, the proposed PCA-based MWI method conveniently yields the actual unambiguous phase of the measured object, because it requires neither phase-shift calibration nor phase unwrapping. Both numerical simulations and experimental results demonstrate that the proposed PCA-based MWI method enlarges the measuring range without amplifying the noise level.
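
    The single-wavelength PCA demodulation at the heart of such methods can be sketched in a few lines (in the spirit of published PCA phase-shifting algorithms): stack the frames, remove the temporal mean, and treat the first two principal components as quadrature signals. The synthetic-wavelength subtraction and crosstalk compensation steps of the paper are not reproduced.

    ```python
    # PCA phase demodulation of one wavelength from frames with unknown shifts.
    import numpy as np

    h = w = 64
    yy, xx = np.mgrid[:h, :w]
    phase_true = 2 * np.pi * ((xx - 32) ** 2 + (yy - 32) ** 2) / 2000.0
    shifts = [0.3, 1.7, 2.9, 4.4]                # unknown, unevenly spaced
    frames = np.stack([1 + 0.8 * np.cos(phase_true + d) for d in shifts])

    data = frames.reshape(len(shifts), -1)
    data = data - data.mean(axis=0)              # remove the DC background
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    pc1, pc2 = vt[0].reshape(h, w), vt[1].reshape(h, w)
    phase = np.arctan2(pc2, pc1)                 # wrapped phase, up to sign/offset
    print(phase.shape)
    ```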

  14. Simultaneous fingerprint, quantitative analysis and anti-oxidative based screening of components in Rhizoma Smilacis Glabrae using liquid chromatography coupled with Charged Aerosol and Coulometric array Detection.

    PubMed

    Yang, Guang; Zhao, Xin; Wen, Jun; Zhou, Tingting; Fan, Guorong

    2017-04-01

    An analytical approach including fingerprinting, quantitative analysis, and rapid screening of anti-oxidative components was established and successfully applied to the comprehensive quality control of Rhizoma Smilacis Glabrae (RSG), a well-known Traditional Chinese Medicine with the homology of medicine and food. Thirteen components were tentatively identified based on their retention behavior, UV absorption, and MS fragmentation patterns. Chemometric analysis based on coulometric array data was performed to evaluate the similarity and variation between fifteen batches. Eight discriminating components were quantified using single-compound calibration. The unit responses of those components in coulometric array detection were calculated and compared with those of several compounds reported to possess antioxidant activity, and four of them were tentatively identified as the main contributors to the total anti-oxidative activity. The main advantage of the proposed approach is that it realizes simultaneous fingerprinting, quantitative analysis, and screening of anti-oxidative components, providing comprehensive information for the quality assessment of RSG.

  15. A general ocean color atmospheric correction scheme based on principal components analysis: Part II. Level 4 merging capabilities

    NASA Astrophysics Data System (ADS)

    Gross-Colzy, Lydwine; Colzy, Stéphane; Frouin, Robert; Henry, Patrice

    2007-09-01

    The Ocean Color Estimation by principal component ANalysis (OCEAN) algorithm performs atmospheric correction of satellite ocean-color imagery in the presence of various aerosol contents and types, including absorbing mixtures, and for the full range of water properties (Case 1 and Case 2 waters), retrieving diffuse water reflectance with good theoretical accuracy. It is easy to implement and has several advantages for operational processing lines: (1) it has de-noising abilities, for it is based on principal component analysis and neural networks; (2) it is able to perform atmospheric correction through cirrus and thin clouds; (3) it is able to retrieve water reflectance in the presence of Sun glint up to a glint reflectance of 0.2; and, more importantly, (4) it is less sensitive to absolute radiometric calibration and directionality than classical ocean-color algorithms. This allows multi-sensor merging (denoted hereafter Level 4 synthesis). These abilities may dramatically improve the daily spatial coverage of ocean color products. In the companion paper (Part I), the theoretical performance of OCEAN in situations of both Case 1 and Case 2 waters is presented for various multispectral radiometers (i.e., POLDER, SeaWiFS, MODIS, MERIS). In this paper (Part II), the focus is on OCEAN's de-noising and merging properties. The ability of the algorithm to work in situations of Sun glint and cirrus/thin clouds is illustrated using MERIS imagery. Multi-directional merging is demonstrated using POLDER imagery (daily and temporal merging), and multi-sensor merging using SeaWiFS and MODIS imagery (daily merging). The resulting products do not show directional artifacts.

  16. High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang

    2016-07-01

    Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, systems with high imaging speed, reasonable cost, and manageable data flow are needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images with sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, thus validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that for BP with full sampling. In addition, compared with compressed-sensing-based PACT, PCA-PACT had approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.

  17. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and the image obtained by expressing the input in the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  18. Multivariate Principal Component Analysis and Case-Based Reasoning for monitoring, fault detection and diagnosis in a WWTP.

    PubMed

    Ruiz, Magda; Sin, Gürkan; Berjaga, Xavier; Colprim, Jesús; Puig, Sebastià; Colomer, Joan

    2011-01-01

    The main idea of this paper is to develop a methodology for process monitoring, fault detection, and predictive diagnosis of a WasteWater Treatment Plant (WWTP). To achieve this goal, a combination of Multiway Principal Component Analysis (MPCA) and Case-Based Reasoning (CBR) is proposed. First, MPCA is used to reduce the multi-dimensional nature of online process data, summarising most of the variance of the process data in a few (new) variables. Next, the outputs of MPCA (t-scores, Q-statistic) are provided as inputs (descriptors) to the CBR method, which is employed to identify problems and propose appropriate solutions (hence diagnosis) based on previously stored cases. The methodology is evaluated on a pilot-scale SBR performing nitrogen, phosphorus, and COD removal, where it helps to diagnose abnormal situations in process operation. Finally, the methodology is believed to be a promising tool for automatic diagnosis and real-time warning that can be used for the daily management of plant operation.

  19. Monitoring of high-power fiber laser welding based on principal component analysis of a molten pool configuration

    NASA Astrophysics Data System (ADS)

    Xiangdong, Gao; Qian, Wen

    2013-12-01

    A molten pool carries abundant welding-quality information during high-power fiber laser welding. An approach for monitoring high-power fiber laser welding status based on principal component analysis (PCA) of the molten pool configuration is investigated. An infrared-sensitive high-speed camera was used to capture molten pool images during laser butt-joint welding of Type 304 austenitic stainless steel plates with a high-power (10 kW) continuous-wave fiber laser. In order to study the relationship between the molten pool configuration and the welding status, a new PCA-based method is proposed to analyze welding stability by comparing the situations in which the laser beam spot moves along, and deviates from, the weld seam. Image processing techniques were applied to process the molten pool images and extract five characteristic parameters. Moreover, the PCA method was used to extract a composite indicator, a linear combination of the five original characteristics, to analyze the different statuses during welding. Experimental results showed that the extracted composite indicator had a close relationship with the actual welding results and could be used to evaluate the status of high-power fiber laser welding, providing a theoretical basis for monitoring laser welding quality.

  20. T-wave morphology parameters based on principal component analysis reproducibility and dependence on T-offset position.

    PubMed

    Extramiana, Fabrice; Haggui, Abdeddayem; Maison-Blanche, Pierre; Dubois, Rémi; Takatsuki, Seiji; Beaufils, Philippe; Leenhardt, Antoine

    2007-10-01

    T-wave morphology parameters based on principal component analysis (PCA) are candidates for better understanding the relation between QT prolongation and torsades de pointes. We aimed to assess the repeatability of PCA parameters and to determine the influence of the T-end position on them. Digital ECGs recorded from 30 subjects were used to assess short-term (5 minutes), circadian, and long-term (28 days) repeatability of PCA parameters. The T-end cursor position was moved backward and forward (± 8 ms) from its optimal position. We calculated the QRS-T angle, PCA ratio, and T-wave residuum (TWR). At the long-term evaluation, the coefficients of variation were 11.3 ± 9.9%, 11.7 ± 7.1%, and 23.0 ± 22.0% for the QRS-T angle, PCA ratio, and TWR, respectively. After moving the T-end cursor, repeatability was 0.42 ± 0.2%, 1.00 ± 1.04%, and 4.0 ± 4.2% for the same PCA parameters. T-wave morphology parameters based on PCA are reproducible, with the exception of the TWR and QRS-T angle. In addition, PCA is robust, showing only little dependence on the T-end cursor position. These data should be taken into account for safety pharmacology trials.
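
    For concreteness, the PCA ratio can be computed from the eigenvalues of the inter-lead covariance over the T wave, as in the hedged toy example below; the QRS-T angle and TWR require additional steps not shown.

    ```python
    # PCA ratio (second-to-first eigenvalue) of a toy 8-lead T-wave segment.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0, 1, 120)
    base = np.sin(np.pi * t)                            # idealized T-wave shape
    leads = np.outer(rng.normal(1, 0.3, size=8), base)  # 8 leads, shared shape
    leads += 0.02 * rng.normal(size=leads.shape)        # measurement noise

    cov = np.cov(leads)                            # 8 x 8 inter-lead covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    print(f"PCA ratio = {eigvals[1] / eigvals[0]:.3f}")
    ```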

  1. Independent Component Analysis-Support Vector Machine-Based Computer-Aided Diagnosis System for Alzheimer's with Visual Support.

    PubMed

    Khedher, Laila; Illán, Ignacio A; Górriz, Juan M; Ramírez, Javier; Brahim, Abdelbasset; Meyer-Baese, Anke

    2017-05-01

    Computer-aided diagnosis (CAD) systems constitute a powerful tool for early diagnosis of Alzheimer's disease (AD), but limitations on interpretability and performance exist. In this work, a fully automatic CAD system based on supervised learning methods is proposed, to be applied to segmented brain magnetic resonance imaging (MRI) from Alzheimer's disease neuroimaging initiative (ADNI) participants for automatic classification. The proposed CAD system possesses two relevant characteristics: optimal performance and visual support for decision making. The CAD is built in two stages: first, feature extraction based on independent component analysis (ICA) of class mean images and, second, support vector machine (SVM) training and classification. The obtained features offer a full graphical representation of the images, giving an understandable logic to the CAD output that can increase confidence in the CAD support. The proposed method yields classification accuracies of up to 89% (with 92% sensitivity and 86% specificity) for normal controls (NC) versus AD patients, 79% (82% sensitivity, 76% specificity) for NC versus mild cognitive impairment (MCI), and 85% (85% sensitivity, 86% specificity) for MCI versus AD patients.
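
    A rough sketch of the ICA-feature-plus-SVM stage follows, with synthetic vectors standing in for segmented MRI data; plain FastICA on the training set approximates, rather than reproduces, the paper's extraction from class-mean images.

    ```python
    # FastICA features + linear SVM on synthetic stand-in "scan" vectors.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(8)
    X = rng.normal(size=(120, 500))              # 120 scans, 500 voxels
    y = (rng.random(120) > 0.5).astype(int)
    X[y == 1, :50] += 0.5                        # class-dependent pattern

    pipe = make_pipeline(FastICA(n_components=10, random_state=0, max_iter=1000),
                         SVC(kernel="linear"))
    print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
    ```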

  2. Statistically based uncertainty analysis for ranking of component importance in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor

    SciTech Connect

    Wilson, G.E.

    1992-01-01

    The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. Because the AHP results are based, in part, on expert opinion, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach: performing a statistically based uncertainty analysis. The new work builds on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem with a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability are also documented.
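
    The Monte Carlo idea can be sketched compactly: perturb each pairwise expert judgment with a multiplicative error, recompute the principal-eigenvector weights, and inspect their spread. The pairwise matrix below is hypothetical, not the ANSR study's.

    ```python
    # Monte Carlo propagation of judgment error through AHP weights.
    import numpy as np

    A = np.array([[1., 3., 5.],
                  [1/3., 1., 2.],
                  [1/5., 1/2., 1.]])             # hypothetical pairwise judgments

    def ahp_weights(M):
        vals, vecs = np.linalg.eig(M)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        return w / w.sum()                       # principal-eigenvector weights

    def perturbed(A, rng, sd=0.15):
        """Multiplicative lognormal error on each judgment, reciprocity kept."""
        n = A.shape[0]
        M = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                M[i, j] = A[i, j] * rng.lognormal(0.0, sd)
                M[j, i] = 1.0 / M[i, j]
        return M

    rng = np.random.default_rng(4)
    samples = np.array([ahp_weights(perturbed(A, rng)) for _ in range(2000)])
    print("mean weights:", samples.mean(axis=0))
    print("std (rank uncertainty):", samples.std(axis=0))
    ```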

  3. Quantitative profiling of polar metabolites in herbal medicine injections for multivariate statistical evaluation based on independence principal component analysis.

    PubMed

    Jiang, Miaomiao; Jiao, Yujiao; Wang, Yuefei; Xu, Lei; Wang, Meng; Zhao, Buchang; Jia, Lifu; Pan, Hao; Zhu, Yan; Gao, Xiumei

    2014-01-01

    Botanical primary metabolites are extensively present in herbal medicine injections (HMIs) but are often ignored in quality control. Because routinely applied reversed-phase chromatographic fingerprinting is biased against hydrophilic substances, strongly polar primary metabolites such as saccharides, amino acids, and organic acids are usually difficult to detect with it. In this study, a proton nuclear magnetic resonance (1H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for a further multivariate quality-control assay. The quantitative 1H NMR method was validated with good linearity, precision, repeatability, stability, and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of χ2 and Hotelling T2 control charts. Through the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to detect out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid, and protocatechuic aldehyde. The integrated strategy provides a reliable approach for identifying and quantifying multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs.
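
    The control-chart step can be sketched as follows, with ordinary PCA standing in for the paper's IPCA, random numbers standing in for the 13 metabolite concentrations, and a common F-distribution form for the Hotelling T2 limit.

    ```python
    # Hotelling T^2 chart on PCA scores with an F-distribution control limit.
    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    batches = rng.normal(size=(36, 13))          # 36 batches x 13 metabolites

    k = 2                                        # first two components
    pca = PCA(n_components=k).fit(batches)
    scores = pca.transform(batches)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)

    n = len(batches)
    ucl = (k * (n - 1) * (n + 1) / (n * (n - k))
           * stats.f.ppf(0.99, k, n - k))        # 99% upper control limit
    print("out-of-control batches:", np.where(t2 > ucl)[0])
    ```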

  4. Improvements in Regression-Based Air Temperature Estimation Incorporating Nighttime Light Data, Principal Component Analysis and Composite Sinusoidal Coefficients

    NASA Astrophysics Data System (ADS)

    Quan, J.

    2016-12-01

    Near surface air temperature (Ta) is one of the most critical variables in climatology, hydrology, epidemiology and environmental health. In-situ measurements are not efficient for characterizing spatially heterogeneous Ta, while remote sensing is a powerful tool to break this limitation. This study proposes a mapping framework for daily mean Ta using an enhanced empirical regression method based on remote sensing data. It differs from previous studies in three aspects. First, nighttime light data is introduced as a predictor (besides seven most Ta-relevant variables, i.e., land surface temperature, normalized difference vegetation index, impervious surface area, black sky albedo, normalized difference water index, elevation, and duration of daylight) considering the urbanization-induced Ta increase over a large area. Second, independent components are extracted using principal component analysis considering the correlations among the above predictors. Third, a composite sinusoidal coefficient regression is developed considering the dynamic Ta-predictor relationship. The derived coefficients are then applied back to the spatially collocated predictors to reconstruct spatio-temporal Ta. This method is performed with 333 weather stations in China during the 2001-2012 period. Evaluation shows overall mean error of -0.01 K, root mean square error (RMSE) of 2.53 K, correlation coefficient (R2) of 0.96, and average uncertainty of 0.21 K. Model inter-comparison shows that this method outperforms six additional empirical regressions that have not incorporated nighttime light data or considered multi-predictor correlations or coefficient dynamics (by 0.18-2.60 K in RMSE and 0.00-0.15 in R2).

  5. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    PubMed Central

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study whose population consisted of 270 employees, selected from the university's 872 staff members in different administrative groups (contractual, fixed-term and regular) through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach’s alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions, namely environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), and ability (knowledge and skills), and for total labor productivity were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results also indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees’ viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff’s view, were in a favorable condition and needed to be further taken into consideration by the authorities. PMID:25560364

  7. Inductive robust principal component analysis.

    PubMed

    Bao, Bing-Kun; Liu, Guangcan; Xu, Changsheng; Yan, Shuicheng

    2012-08-01

    In this paper, we address the error correction problem, that is, to uncover the low-dimensional subspace structure from high-dimensional observations that are possibly corrupted by errors. When the errors are Gaussian distributed, Principal Component Analysis (PCA) can find the optimal (in the least-square-error sense) low-rank approximation to high-dimensional data. However, the canonical PCA method is known to be extremely fragile to the presence of gross corruptions. Recently, Wright et al. established the so-called Robust Principal Component Analysis (RPCA) method, which handles grossly corrupted data well [14]. However, RPCA is a transductive method and does not handle well new samples that are not involved in the training procedure. Given a new datum, RPCA essentially needs to recalculate over all the data, resulting in high computational cost. RPCA is therefore inappropriate for applications that require fast online computation. To overcome this limitation, we propose an Inductive Robust Principal Component Analysis (IRPCA) method. Given a set of training data, unlike RPCA, which targets recovering the original data matrix, IRPCA aims at learning the underlying projection matrix, which can be used to efficiently remove the possible corruptions in any datum. The learning is done by solving a nuclear-norm-regularized minimization problem, which is convex and can be solved in polynomial time. Extensive experiments on a benchmark human face dataset and two video surveillance datasets show that IRPCA is not only robust to gross corruptions but also handles new data efficiently.
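    For readers who want to experiment, the following is a minimal sketch of the RPCA baseline (principal component pursuit) that IRPCA improves on, solved with the standard inexact augmented-Lagrangian iteration. It illustrates the low-rank-plus-sparse decomposition idea only; it is not the authors' IRPCA solver, and the parameter defaults are conventional choices, not theirs.

```python
# Hedged sketch: RPCA via singular-value thresholding + soft shrinkage.
import numpy as np

def shrink(M, tau):
    """Soft-threshold the entries of M."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_shrink(M, tau):
    """Threshold the singular values of M (low-rank proximal step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(X, lam=None, mu=None, n_iter=200):
    """Decompose X into low-rank L plus sparse S with an inexact ALM loop."""
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.sum(np.abs(X))
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svd_shrink(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - S)
    return L, S

# Low-rank data with 5% gross corruptions.
rng = np.random.default_rng(0)
L0 = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
S0 = (rng.random((100, 80)) < 0.05) * rng.normal(scale=10, size=(100, 80))
L, S = rpca(L0 + S0)
print("relative recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```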

  8. [Evaluation of germplasm resource of Ophiopogon japonicus in Sichuan basin based on principal component and cluster analysis].

    PubMed

    Liu, Jiang; Chen, Xingfu; Liu, Sha; Yang, Wenyu; Du, Gang; Liu, Weiguo

    2010-03-01

    To compare and appraise the quality of germplasm resources of Ophiopogon japonicus in the Sichuan basin, 24 wild germplasm resources of O. japonicus from different areas of the basin were comprehensively compared, according to their main contents and yield traits, using principal component analysis and cluster analysis in the SPSS 17.0 software. The comprehensive quality evaluation values of the six samples from Ziyang, Jianyang, Leshan, Yibin, Chongqing and Mianyang were higher than those of the others; the Ziyang sample had the best quality and the Dazhou sample the poorest. Cluster analysis of the raw data gave results similar to those of the principal component analysis. The wild resources of O. japonicus in the Sichuan basin are rich, and there are large quality differences among them. Comprehensive evaluation of O. japonicus quality through principal component analysis is reliable, and the cluster analysis results support its conclusions; the method could provide a reference for selecting high-quality O. japonicus resources.
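    A minimal sketch of this style of evaluation is given below, with synthetic trait data standing in for the measured contents and yields. Weighting each accession's PC scores by the explained-variance ratios to form a single comprehensive value is a common agronomic convention assumed here, not necessarily the authors' exact formula; Ward clustering provides the companion grouping.

```python
# Hedged sketch: PCA-based comprehensive quality score plus hierarchical
# clustering of germplasm accessions. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
traits = rng.normal(size=(24, 6))        # 24 accessions x 6 content/yield traits

Z = StandardScaler().fit_transform(traits)
pca = PCA().fit(Z)
scores = pca.transform(Z)

# Comprehensive evaluation value: PC scores weighted by explained variance.
quality = scores @ pca.explained_variance_ratio_
print("best accession index:", np.argmax(quality))

# Cluster the accessions and compare the grouping with the PCA ranking.
groups = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
print("cluster labels:", groups)
```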

  9. Estimating stellar atmospheric parameters, absolute magnitudes and elemental abundances from the LAMOST spectra with Kernel-based principal component analysis

    NASA Astrophysics Data System (ADS)

    Xiang, M.-S.; Liu, X.-W.; Shi, J.-R.; Yuan, H.-B.; Huang, Y.; Luo, A.-L.; Zhang, H.-W.; Zhao, Y.-H.; Zhang, J.-N.; Ren, J.-J.; Chen, B.-Q.; Wang, C.; Li, J.; Huo, Z.-Y.; Zhang, W.; Wang, J.-L.; Zhang, Y.; Hou, Y.-H.; Wang, Y.-F.

    2017-01-01

    Accurate determination of stellar atmospheric parameters and elemental abundances is crucial for Galactic archaeology via large-scale spectroscopic surveys. In this paper, we estimate stellar atmospheric parameters - effective temperature Teff, surface gravity log g and metallicity [Fe/H], absolute magnitudes MV and MKs, α-element to metal (and iron) abundance ratio [α/M] (and [α/Fe]), as well as carbon and nitrogen abundances [C/H] and [N/H] from the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) spectra with a multivariate regression method based on kernel-based principal component analysis, using stars in common with other surveys (Hipparcos, Kepler, Apache Point Observatory Galactic Evolution Experiment) as training data sets. Both internal and external examinations indicate that, given a spectral signal-to-noise ratio (SNR) better than 50, our method is capable of delivering stellar parameters with a precision of ~100 K for Teff, ~0.1 dex for log g, 0.3-0.4 mag for MV and MKs, 0.1 dex for [Fe/H], [C/H] and [N/H], and better than 0.05 dex for [α/M] ([α/Fe]). The results are satisfactory even for a spectral SNR of 20. The work presents the first determinations of [C/H] and [N/H] abundances from a vast data set of LAMOST and, to our knowledge, the first reported implementation of absolute magnitude estimation directly based on a vast data set of observed spectra. The derived stellar parameters for millions of stars from the LAMOST surveys will be publicly available in the form of value-added catalogues.
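    A hedged sketch of the core recipe, kernel PCA features feeding a linear regression, is given below. The spectra, label, kernel, and component count are illustrative assumptions, not the survey pipeline.

```python
# Hedged sketch: kernel PCA features from spectra feeding a ridge regression.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
spectra = rng.normal(size=(500, 300))             # 500 stars x 300 pixels
teff = 5000 + spectra[:, :10].sum(axis=1) * 50    # synthetic "label" in K

X_tr, X_te, y_tr, y_te = train_test_split(spectra, teff, random_state=0)

kpca = KernelPCA(n_components=50, kernel="rbf", gamma=1e-3).fit(X_tr)
reg = Ridge(alpha=1.0).fit(kpca.transform(X_tr), y_tr)

pred = reg.predict(kpca.transform(X_te))
print("Teff scatter: %.0f K" % np.std(pred - y_te))
```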

  10. Principal components analysis competitive learning.

    PubMed

    López-Rubio, Ezequiel; Ortiz-de-Lazcano-Lobato, Juan Miguel; Muñoz-Pérez, José; Gómez-Ruiz, José Antonio

    2004-11-01

    We present a new neural model that extends classical competitive learning by performing a principal components analysis (PCA) at each neuron. This model improves on known local PCA methods because it does not require presenting the entire data set to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction properties of PCA. Furthermore, every neuron is able to modify its behavior to adapt to the local dimensionality of the input distribution. Hence, our model has a dimensionality-estimation capability. The experimental results we present show the dimensionality-reduction capabilities of the model with multisensor images.

  11. [Discrimination of wood biological decay by soft independent modeling of class analogy (SIMCA) pattern recognition based on principal component analysis].

    PubMed

    Yang, Zhong; Jiang, Ze-Hui; Fei, Ben-Hua; Qin, Dao-Chun

    2007-04-01

    Wood, as a biomass material, tends to be attacked by microorganisms, and its structure can be rapidly destroyed by biological decay. It is therefore important to detect and identify biological decay in wood rapidly and accurately. Extensive research has demonstrated that near infrared spectroscopy (NIR) and soft independent modeling of class analogy (SIMCA) can be used to discriminate or detect a wide variety of food, medicine and agricultural products. The use of NIR coupled with principal component analysis (PCA) and SIMCA pattern recognition to detect wood biological decay was investigated in the present paper. The results showed that NIR spectroscopy coupled with SIMCA pattern recognition can rapidly detect biological decay in wood. The discrimination accuracies of the SIMCA model on the training set for the non-decay, white-rot and brown-rot decay samples were 100%, 82.5% and 100%, respectively, and those on the test set were 100%, 85% and 100%, respectively. However, some white-rot decay samples were mis-discriminated as brown-rot decay; the main reasons might be that the training set did not contain enough typical samples and that there is only a slight difference between white-rot and brown-rot decay during the early stage of decay.
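    The SIMCA idea is compact enough to sketch: fit one PCA per class and assign a sample to the class whose subspace reconstructs it best. The spectra below are synthetic stand-ins for the NIR data, and the three-component models and residual-distance rule are simplifying assumptions.

```python
# Hedged sketch of SIMCA-style class modeling with per-class PCA residuals.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

def make_class(center, n=40, n_pix=100):
    """Synthetic spectra: a class template plus noise."""
    return center + rng.normal(scale=0.2, size=(n, n_pix))

classes = {name: make_class(rng.normal(size=100))
           for name in ("non-decay", "white-rot", "brown-rot")}

models = {name: PCA(n_components=3).fit(X) for name, X in classes.items()}

def residual(model, x):
    """Distance from spectrum x to a class PCA subspace."""
    x_hat = model.inverse_transform(model.transform(x[None, :]))
    return np.linalg.norm(x - x_hat[0])

test = classes["white-rot"][0]
pred = min(models, key=lambda name: residual(models[name], test))
print("assigned class:", pred)
```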

  12. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation with detectors to obtain signals. These signals are analyzed to obtain indications about what is happening within the system or to determine problems that may be present. For multi-phase system analysis, more than one radiotracer is used and the result is a mixture of radiotracer signals; the problem in such cases is how to separate these signals from each other. This paper presents a method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals. Two different radiotracers, Technetium-99m (Tc(99m)) and Barium-137m (Ba(137m)), were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals were mixed, and signal processing steps including background correction and signal de-noising were performed before applying the signal separation algorithms. Three separation algorithms were carried out: a time-domain-based algorithm, an Independent Component Analysis (ICA) based algorithm, and a Principal Component Analysis (PCA) based algorithm. The results proved the superiority of the PCA-based separation algorithm over the other algorithms; the PCA-based algorithm together with the signal processing steps gives a considerable improvement in the separation process.
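    A small sketch of the comparison follows, with two synthetic decay curves standing in for the Tc(99m) and Ba(137m) responses: mix them through an assumed detector matrix, recover estimates with PCA and ICA, and score each against the true sources. The mixing matrix and time constants are illustrative, not measured values.

```python
# Hedged sketch: PCA vs. ICA separation of two mixed radiotracer-like signals.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 2000)

# Two synthetic tracer responses with different injection times and decays.
s1 = np.exp(-t / 5) * (t > 5)
s2 = np.exp(-t / 20) * (t > 12)
S = np.column_stack([s1, s2]) + rng.normal(scale=0.01, size=(t.size, 2))

A = np.array([[0.7, 0.3], [0.4, 0.6]])      # mixing seen by two detectors
X = S @ A.T

pca_est = PCA(n_components=2, whiten=True).fit_transform(X)
ica_est = FastICA(n_components=2, random_state=0).fit_transform(X)

for name, est in [("PCA", pca_est), ("ICA", ica_est)]:
    corr = np.abs(np.corrcoef(est.T, S.T))[:2, 2:]
    print(name, "max |corr| with true sources:", corr.max(axis=1).round(3))
```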

  13. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    PubMed

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model for borneol based on principal component analysis (PCA) and a support vector machine (SVM) was established. As a common agent in Chinese medicine that comes from different sources and is easily confused in pharmaceutical and trade settings, borneol needs a rapid, simple and accurate detection and identification method. To assure the quality of borneol products and protect consumers' rights, identifying borneol quickly, efficiently and correctly is of significant importance to its production and trade. Terahertz time-domain spectroscopy is a new spectroscopic approach that characterizes materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. The 2D (PC1 × PC2) and 3D (PC1 × PC2 × PC3) PCA score plots of the three kinds of borneol samples were obtained through PCA, and both show good clustering of the three kinds of borneol. The score matrix of the first 10 principal components (PCs) was used in place of the original spectral data; 60 samples of the three kinds of borneol were used for training, and 60 unknown samples were then identified. Four SVM models with different kernel functions were set up in this way. Results show that the identification and classification accuracy of the SVM with the RBF kernel function for the three kinds of borneol is 100%, so the SVM with the radial basis kernel function was selected to establish the borneol identification model. In addition, in the noisy case, the classification accuracy rates of all four SVM kernel functions are above 85%, which indicates that SVM has strong generalization ability. This study shows that PCA with SVM on borneol terahertz spectroscopy has good classification and identification performance, and provides a new method for the identification of borneol species.
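    The classification chain reduces to a short pipeline. The sketch below keeps the paper's choices of 10 principal components and an RBF-kernel SVM, but uses synthetic spectra in place of the measured THz absorption curves; the class templates and noise level are assumptions.

```python
# Hedged sketch of the PCA + SVM (RBF kernel) spectral classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_per_class, n_freq = 40, 180            # 0.2-2 THz sampled on 180 points

# Three borneol types, each a random template plus noise.
X = np.vstack([base + rng.normal(scale=0.3, size=(n_per_class, n_freq))
               for base in rng.normal(size=(3, n_freq))])
y = np.repeat([0, 1, 2], n_per_class)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```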

  14. Fast Steerable Principal Component Analysis.

    PubMed

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-03-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.

  15. Functional Generalized Structured Component Analysis.

    PubMed

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data, which often involve different measurement occasions across participants and a number of measurement occasions that exceeds the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.

  17. Parametric functional principal component analysis.

    PubMed

    Sang, Peijun; Wang, Liangliang; Cao, Jiguo

    2017-09-01

    Functional principal component analysis (FPCA) is a popular approach in functional data analysis to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). Most existing FPCA approaches use a set of flexible basis functions such as the B-spline basis to represent the FPCs, and control the smoothness of the FPCs by adding roughness penalties. However, the flexible representations pose difficulties for users in understanding and interpreting the FPCs. In this article, we consider a variety of applications of FPCA and find that, in many situations, the shapes of the top FPCs are simple enough to be approximated using simple parametric functions. We propose a parametric approach to estimate the top FPCs to enhance their interpretability for users. Our parametric approach can also circumvent the smoothing-parameter selection process in conventional nonparametric FPCA methods. In addition, our simulation study shows that the proposed parametric FPCA is more robust when outlier curves exist. The parametric FPCA method is demonstrated by analyzing several datasets from a variety of applications. © 2017, The International Biometric Society.

  18. Relating Essential Proteins to Drug Side-Effects Using Canonical Component Analysis: A Structure-Based Approach.

    PubMed

    Liu, Tianyun; Altman, Russ B

    2015-07-27

    The molecular mechanism of many drug side-effects is unknown and difficult to predict. Previous methods for explaining side-effects have focused on known drug targets and their pathways. However, low affinity binding to proteins that are not usually considered drug targets may also drive side-effects. In order to assess these alternative targets, we used the 3D structures of 563 essential human proteins systematically to predict binding to 216 drugs. We first benchmarked our affinity predictions with available experimental data. We then combined singular value decomposition and canonical component analysis (SVD-CCA) to predict side-effects based on these novel target profiles. Our method predicts side-effects with good accuracy (average AUC: 0.82 for side effects present in <50% of drug labels). We also noted that side-effect frequency is the most important feature for prediction and can confound efforts at elucidating mechanism; our method allows us to remove the contribution of frequency and isolate novel biological signals. In particular, our analysis produces 2768 triplet associations between 50 essential proteins, 99 drugs, and 77 side-effects. Although experimental validation is difficult because many of our essential proteins do not have validated assays, we nevertheless attempted to validate a subset of these associations using experimental assay data. Our focus on essential proteins allows us to find potential associations that would likely be missed if we used recognized drug targets. Our associations provide novel insights about the molecular mechanisms of drug side-effects and highlight the need for expanded experimental efforts to investigate drug binding to proteins more broadly.

  19. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate setting. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; the multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
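    A minimal sketch of the pipeline follows, assuming synthetic predictors: PCA removes the multicollinearity, and a small multilayer perceptron with sigmoid hidden units regresses an ozone anomaly on the component scores. The network size, iteration budget, and data are illustrative assumptions.

```python
# Hedged sketch of the PCA-then-ANN pipeline: decorrelate predictors with
# PCA, then train a multilayer perceptron. All data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 6))                    # meteorological predictors
X[:, 5] = X[:, 0] + 0.1 * rng.normal(size=1000)   # induce multicollinearity
ozone = 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=1.0, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)

pca = PCA(n_components=4).fit(X_tr)               # decorrelated inputs
mlp = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                   max_iter=3000, random_state=0).fit(pca.transform(X_tr), y_tr)
print("test R^2:", round(mlp.score(pca.transform(X_te), y_te), 3))
```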

  20. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: the connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.

  1. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  2. Bayesian robust principal component analysis.

    PubMed

    Ding, Xinghao; He, Lihan; Carin, Lawrence

    2011-12-01

    A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.

  3. [Analysis and comparison of intestinal absorption of components of Gegenqinlian decoction in different combinations based on pharmacokinetic parameters].

    PubMed

    Zhang, Yi-Zhu; An, Rui; Yuan, Jin; Wang, Yue; Gu, Qing-Qing; Wang, Xin-Hong

    2013-10-01

    To analyse and compare the characteristics of the intestinal absorption of puerarin, baicalin, berberine and liquiritin in different combinations of Gegenqinlian decoction based on pharmacokinetic parameters, a sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was applied for the quantification of the four components in rat plasma, and pharmacokinetic parameters were determined from the plasma concentration-time data with the DAS software package. The influence of different combinations on the pharmacokinetics of the four components was studied to analyse and compare their absorption differences, together with results from the in vitro everted gut model and the rat single-pass intestinal perfusion model. The results showed that, compared with other combinations, the AUC values of puerarin, baicalin and berberine were increased significantly in the Gegenqinlian decoction group, while the AUC value of liquiritin was reduced. Moreover, the absorption of the four components was increased significantly, as supported by the results from the in vitro everted gut model and the rat single-pass intestinal perfusion model, which indicated that the Gegenqinlian decoction may promote the absorption of the four components and accelerate the metabolism of liquiritin by cytochrome P450.

  4. Involvement of the anterior cingulate cortex in time-based prospective memory task monitoring: An EEG analysis of brain sources using Independent Component and Measure Projection Analysis.

    PubMed

    Cruz, Gabriela; Burgos, Pablo; Kilborn, Kerry; Evans, Jonathan J

    2017-01-01

    Time-based prospective memory (PM), remembering to do something at a particular moment in the future, is considered to depend upon self-initiated strategic monitoring, involving a retrieval mode (sustained maintenance of the intention) plus target checking (intermittent time checks). The present experiment was designed to explore what brain regions and brain activity are associated with these components of strategic monitoring in time-based PM tasks. 24 participants were asked to reset a clock every four minutes, while performing a foreground ongoing word categorisation task. EEG activity was recorded and data were decomposed into source-resolved activity using Independent Component Analysis. Common brain regions across participants, associated with retrieval mode and target checking, were found using Measure Projection Analysis. Participants decreased their performance on the ongoing task when concurrently performed with the time-based PM task, reflecting an active retrieval mode that relied on withdrawal of limited resources from the ongoing task. Brain activity, with its source in or near the anterior cingulate cortex (ACC), showed changes associated with an active retrieval mode including greater negative ERP deflections, decreased theta synchronization, and increased alpha suppression for events locked to the ongoing task while maintaining a time-based intention. Activity in the ACC was also associated with time-checks and found consistently across participants; however, we did not find an association with time perception processing per se. The involvement of the ACC in both aspects of time-based PM monitoring may be related to different functions that have been attributed to it: strategic control of attention during the retrieval mode (distributing attentional resources between the ongoing task and the time-based task) and anticipatory/decision making processing associated with clock-checks.
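    The source-decomposition step can be sketched with scikit-learn's FastICA on synthetic multichannel data. The three sources and the 32-channel mixing below are illustrative assumptions, not the study's EEG or its specific ICA implementation.

```python
# Hedged sketch: decompose multichannel EEG-like data into independent
# components with FastICA, as a stand-in for source-resolved analysis.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 5000)

sources = np.column_stack([
    np.sin(2 * np.pi * 6 * t),                 # theta-band rhythm
    np.sign(np.sin(2 * np.pi * 0.5 * t)),      # slow square wave (e.g. blinks)
    rng.laplace(size=t.size),                  # heavy-tailed noise source
])
mixing = rng.normal(size=(32, 3))              # 32 scalp channels
eeg = sources @ mixing.T + 0.05 * rng.normal(size=(t.size, 32))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(eeg)            # source-resolved activity
print("unmixed shape:", components.shape)
```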

  6. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
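    A toy version of the workflow follows, under stated assumptions: a five-spectrum reference library, a 0.01 explained-variance threshold for counting components, and correlation as the matching metric. It shows the three stages, PCA to estimate the number of components, ICA to unmix, and library matching with a reliability score, but it is not the authors' algorithm.

```python
# Hedged sketch: PCA component counting, ICA unmixing, library matching.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(8)
n_points = 600
library = {f"pigment_{i}": np.abs(rng.normal(size=n_points)) for i in range(5)}

# Twenty measured spectra, each a random mixture of two library pigments.
truth = ["pigment_1", "pigment_3"]
A = rng.random((20, 2))
X = A @ np.vstack([library[name] for name in truth])

# PCA gives a crude estimate of the number of underlying components.
n_comp = int(np.sum(PCA().fit(X).explained_variance_ratio_ > 0.01))

# ICA over the spectral axis: each column of `sources` is an estimated spectrum.
sources = FastICA(n_components=n_comp, random_state=0).fit_transform(X.T)

for comp in sources.T:
    scores = {n: abs(np.corrcoef(comp, ref)[0, 1]) for n, ref in library.items()}
    best = max(scores, key=scores.get)
    print(f"matched {best} (|r| = {scores[best]:.2f})")  # |r| as reliability
```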

  7. Cluster and principal component analysis based on SSR markers of Amomum tsao-ko in Jinping County of Yunnan Province

    NASA Astrophysics Data System (ADS)

    Ma, Mengli; Lei, En; Meng, Hengling; Wang, Tiantao; Xie, Linyan; Shen, Dong; Xianwang, Zhou; Lu, Bingyue

    2017-08-01

    Amomum tsao-ko is a commercial plant used for various purposes in the medicinal and food industries. For the present investigation, 44 germplasm samples were collected from Jinping County of Yunnan Province. Cluster analysis and two-dimensional principal component analysis (PCA) were used to represent the genetic relations among Amomum tsao-ko using simple sequence repeat (SSR) markers. Cluster analysis clearly distinguished the sample groups: two major clusters were formed, the first (Cluster I) consisting of 34 individuals and the second (Cluster II) of 10 individuals, with Cluster I as the main group containing multiple sub-clusters. PCA also showed two groups: PCA Group 1 included 29 individuals and PCA Group 2 included 12 individuals, consistent with the results of the cluster analysis. The purpose of the present investigation was to provide information on the genetic relationships of Amomum tsao-ko germplasm resources in the main producing areas and to provide a theoretical basis for the protection and utilization of Amomum tsao-ko resources.

  8. Principal component analysis-based anatomical motion models for use in adaptive radiation therapy of head and neck cancer patients

    NASA Astrophysics Data System (ADS)

    Chetvertkov, Mikhail A.

    Purpose: To develop standard and regularized principal component analysis (PCA) models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients, assess their potential use in adaptive radiation therapy (ART), and to extract quantitative information for treatment response assessment. Methods: Planning CT (pCT) images of H&N patients were artificially deformed to create "digital phantom" images, which modeled systematic anatomical changes during Radiation Therapy (RT). Artificial deformations closely mirrored patients' actual deformations, and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms), and between pCT and clinical CBCTs. Patient-specific standard PCA (SPCA) and regularized PCA (RPCA) models were built from these synthetic and clinical DVF sets. Eigenvectors, or eigenDVFs (EDVFs), having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Modeled anatomies were used to assess the dose deviations with respect to the planned dose distribution. Results: PCA models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade SPCA's ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes, and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. For dose assessment it has been shown that the modeled dose distribution was different from the planned dose for the parotid glands due to their shrinkage and shift into

  9. [Feature Extraction for Cough-sound Recognition Based on Principle Component Analysis and Non-uniform Filter-bank].

    PubMed

    Zhu, Chumei; Mo, Hongqiang; Tian, Lainfang; Zheng, Zeguang

    2015-08-01

    Cough recognition provides important clinical information for the treatment of many respiratory diseases. A new Mel frequency cepstrum coefficient (MFCC) extraction method has been proposed on the basis of the distributional characteristics of the cough spectrum. The whole frequency band was divided into several sub-bands, and the energy coefficient for each band was obtained by principal component analysis. A non-uniform filter-bank on the Mel frequency scale was then designed to improve the MFCC extraction process by distributing the filters according to the spectral energy coefficients. A cough recognition experiment using a hidden Markov model was carried out, and the results

  10. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and their elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were measured in N. roborowskii, of which only V could not be detected. Na, K and Ca were present at high concentrations; Ti showed the largest content variation and K the smallest. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to their geographical origins, which were clearly revealed by PCA. These results will provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.

  11. Structured Functional Principal Component Analysis

    PubMed Central

    Shou, Haochang; Zipunnikov, Vadim; Crainiceanu, Ciprian M.; Greven, Sonja

    2015-01-01

    Summary Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep. PMID:25327216

  12. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Wada, Ted S.

    In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the acoustic echo cancellation (AEC) problem. Such a “system” perspective aims for the integration of individual components, or algorithms, into a cohesive unit for the benefit of the system as a whole to cope with real-world enhancement problems. The standard system identification approach by minimizing the mean square error (MSE) of a linear system is sensitive to distortions that greatly affect the quality of the identification result. Therefore, we begin by examining in detail the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of the LMS algorithm when there is an interference at the near end, where the source of distortion may be linear or nonlinear. We provide a thorough derivation and analysis of the error recovery nonlinearity (ERN) that “enhances” the filter estimation error prior to the adaptation to transform the corrupted error’s distribution into a desired one, or very close to it, in order to assist the linear adaptation process. We reveal important connections of the residual echo enhancement (REE) technique to other existing AEC and signal enhancement procedures, where the technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. Notably, the single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. Indeed, SBSS optimized via ICA leads to the system combination of the LMS algorithm with the ERN that allows continuous and stable adaptation even during double talk. Next, we extend the system perspective
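    The error-feedback idea can be sketched as an NLMS echo canceller whose filter-estimation error passes through a saturating nonlinearity before adaptation. The tanh clipper below is one simple stand-in for the ERN, not the dissertation's exact function, and the echo path, step size, and clipping scale are illustrative assumptions.

```python
# Hedged sketch: NLMS adaptive echo canceller with a noise-suppressing
# nonlinearity in the error feedback loop. All signals are synthetic.
import numpy as np

rng = np.random.default_rng(9)
n, taps = 20000, 64

far_end = rng.normal(size=n)                      # loudspeaker signal
echo_path = rng.normal(size=taps) * np.exp(-np.arange(taps) / 10)
near_noise = 0.1 * rng.laplace(size=n)            # near-end interference

mic = np.convolve(far_end, echo_path)[:n] + near_noise

w = np.zeros(taps)
mu, eps, scale = 0.5, 1e-6, 0.5
for i in range(taps, n):
    x = far_end[i - taps:i][::-1]                 # reference vector
    e = mic[i] - w @ x                            # residual echo + noise
    g = scale * np.tanh(e / scale)                # error nonlinearity: limits
                                                  # the update under bursts
    w += mu * g * x / (x @ x + eps)

err = np.linalg.norm(w - echo_path) / np.linalg.norm(echo_path)
print(f"relative echo-path error: {err:.3f}")
```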

  13. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  14. [Discrimination of varieties of apple using near infrared spectra based on principal component analysis and artificial neural network model].

    PubMed

    He, Yong; Li, Xiao-Li; Shao, Yong-Ni

    2006-05-01

    A new method for discriminating apple varieties by means of near infrared spectroscopy (NIRS) was developed. First, principal component analysis (PCA) was used to compress thousands of spectral data points into several variables describing the body of the spectra. The analysis suggested that the cumulative reliabilities of PC1 and PC2 (the first two principal components) were more than 98%, and a 2-dimensional plot was drawn with the scores of PC1 and PC2; it appeared to provide the best clustering of the apple varieties. The loading plot was drawn with PC1 and PC2 over the whole wavelength region, and the fingerprint spectra, which were sensitive to the apple variety, were obtained from the loading plot. The fingerprint spectra were applied as ANN-BP inputs. Seventy-five samples from three varieties were selected randomly and used to build a discrimination model. This model was used to predict the varieties of 15 unknown samples, and a recognition rate of 100% was achieved. The model is reliable and practicable, so the present paper offers a new approach to the fast discrimination of apple varieties.

  15. Modeling and Prediction of Monthly Total Ozone Concentrations by Use of an Artificial Neural Network Based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Surajit; Chattopadhyay, Goutami

    2012-10-01

    In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.

  16. Independent component analysis in spiking neurons.

    PubMed

    Savin, Cristina; Joshi, Prashant; Triesch, Jochen

    2010-04-22

    Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered, when the activity of different neurons is decorrelated by adaptive lateral inhibition.

  17. Component protection based automatic control

    SciTech Connect

    Otaduy, P J

    1992-03-01

    Control and safety systems, as well as operating procedures, are designed on the basis of critical process parameter limits. The expectation is that short- and long-term mechanical damage and process failures will be avoided by operating the plant within the specified constraint envelopes. In this paper, one of the Advanced Liquid Metal Reactor (ALMR) design duty cycle events is discussed to corroborate that the time has come to explicitly make component protection part of the control system. Component stress assessment and aging data should be an integral part of the control system; transient trajectory planning and operating limits could then be aimed at minimizing component-specific and overall plant component damage cost functions. The impact of transients on critical components could then be managed according to plant lifetime design goals. The need for developing methodologies for online transient trajectory planning and assessment of operating limits, in order to facilitate the explicit incorporation of damage assessment capabilities into plant control and protection systems, is discussed. 12 refs.

  18. Discrimination of rice panicles by hyperspectral reflectance data based on principal component analysis and support vector classification*

    PubMed Central

    Liu, Zhan-yu; Shi, Jing-jing; Zhang, Li-wen; Huang, Jing-feng

    2010-01-01

    Detection of crop health conditions plays an important role in making control strategies of crop disease and insect damage and gaining high-quality production at late growth stages. In this study, hyperspectral reflectance of rice panicles was measured at the visible and near-infrared regions. The panicles were divided into three groups according to health conditions: healthy panicles, empty panicles caused by Nilaparvata lugens Stål, and panicles infected with Ustilaginoidea virens. Low order derivative spectra, namely, the first and second orders, were obtained using different techniques. Principal component analysis (PCA) was performed to obtain the principal component spectra (PCS) of the foregoing derivative and raw spectra to reduce the reflectance spectral dimension. Support vector classification (SVC) was employed to discriminate the healthy, empty, and infected panicles, with the front three PCS as the independent variables. The overall accuracy and kappa coefficient were used to assess the classification accuracy of SVC. The overall accuracies of SVC with PCS derived from the raw, first, and second reflectance spectra for the testing dataset were 96.55%, 99.14%, and 96.55%, and the kappa coefficients were 94.81%, 98.71%, and 94.82%, respectively. Our results demonstrated that it is feasible to use visible and near-infrared spectroscopy to discriminate health conditions of rice panicles. PMID:20043354

  19. Evaluation of the aroma quality of Chinese traditional soy paste during storage based on principal component analysis.

    PubMed

    Peng, Xingyun; Li, Xin; Shi, Xiaodi; Guo, Shuntang

    2014-05-15

    Soy paste, a fermented soybean product, is widely used for flavouring in East and Southeast Asian countries. The characteristic aroma of soy paste is important throughout its shelf life. This study extracted volatile compounds via headspace solid-phase microextraction and conducted a quantitative analysis of 15 key volatile compounds using gas chromatography and gas chromatography-mass spectrometry. Changes in aroma content during storage were analyzed using an accelerated storage model (40 °C, 28 days). Over the 28 days of storage, the results showed that among key soy paste volatile compounds, alcohol and aldehyde contents decreased by 35% and 26%, respectively. By contrast, acid, ester, and heterocycle contents increased by 130%, 242%, and 15%, respectively. The overall odour type transformed from a floral to a roasting aroma. According to sample clustering in the principal component analysis, the storage life of soy paste could be divided into three periods, representing the floral, roasting, and pungent aroma types of soy paste.

  20. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices of multicollinearity diagnostics, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, covering the complete calculation process of principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and a simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
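    A scripted equivalent of the workflow (here in Python rather than SPSS) makes the logic explicit: standardize, keep the leading principal components, and regress on their scores. The data are synthetic and deliberately collinear; the two-component choice is an assumption for illustration.

```python
# Hedged sketch of principal component regression (PCR) on collinear data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

# The condition number flags the multicollinearity plain OLS would suffer from.
print("condition number:", round(np.linalg.cond(X)))

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("PCR R^2:", round(pcr.score(X, y), 3))
```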

  1. Enantiomeric separation of isochromene derivatives by high-performance liquid chromatography using cyclodextrin based stationary phases and principal component analysis of the separation data.

    PubMed

    Nanayakkara, Yasith S; Woods, Ross M; Breitbach, Zachary S; Handa, Sachin; Slaughter, LeGrande M; Armstrong, Daniel W

    2013-08-30

    Isochromene derivatives are very important precursors in the natural products industry. Hence the enantiomeric separations of chiral isochromenes are important in the pharmaceutical industry and for organic asymmetric synthesis. Here we report enantiomeric separations of 21 different chiral isochromene derivatives, which were synthesized using alkynylbenzaldehyde cyclization catalyzed by chiral gold(I) acyclic diaminocarbene complexes. All separations were achieved by high-performance liquid chromatography with cyclodextrin based (Cyclobond) chiral stationary phases. Retention data of 21 chiral compounds and 14 other previously separated isochromene derivatives were analyzed using principal component analysis. The effect of the structure of the substituents on the isochromene ring on enantiomeric resolution as well as the other separation properties was analyzed in detail. Using principal component analysis it can be shown that the structural features that contribute to increased retention are different from those that enhance enantiomeric resolution. In addition, principal component analysis is useful for eliminating redundant factors from consideration when analyzing the effect of various chromatographic parameters. It was found that the chiral recognition mechanism is different for the larger γ-cyclodextrin as compared to the smaller β-cyclodextrin derivatives. Finally this specific system of chiral analytes and cyclodextrin based chiral selectors provides an effective format to examine the application of principal component analysis to enantiomeric separations using basic retention data and structural features. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    PubMed

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize coherent patterns of brain activity as a means of identifying the brain systems engaged by a cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) can be applied to characterize the interactions between the brain regions involved. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides higher clustering sensitivity and is more sensitive to relatively weak functional connections. The study concludes that, during the reception of emotional stimuli, the most clearly activated areas are mainly distributed in the frontal lobe, the cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable approach for follow-up studies of brain functional connectivity.

  3. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
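    A hedged sketch of eigen-image classification follows, with random arrays standing in for radar time-frequency signatures: project each flattened image onto the leading principal components and classify by the nearest class mean. The image size, component count, and nearest-mean rule are assumptions, not the paper's exact classifier.

```python
# Hedged sketch: PCA "eigen images" of motion signatures + nearest class mean.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_per_class, h, w = 30, 32, 32

falls = rng.normal(loc=1.0, size=(n_per_class, h * w))   # flattened spectrograms
walks = rng.normal(loc=-1.0, size=(n_per_class, h * w))
X = np.vstack([falls, walks])
y = np.repeat([1, 0], n_per_class)

pca = PCA(n_components=10).fit(X)        # eigen images of observed motions
Z = pca.transform(X)
means = {c: Z[y == c].mean(axis=0) for c in (0, 1)}

test = pca.transform(rng.normal(loc=1.0, size=(1, h * w)))  # unseen "fall"
pred = min(means, key=lambda c: np.linalg.norm(test - means[c]))
print("predicted class (1 = fall):", pred)
```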

  4. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  5. Applications of hierarchical cluster analysis (CLA) and principal component analysis (PCA) in feed structure and feed molecular chemistry research, using synchrotron-based Fourier transform infrared (FTIR) microspectroscopy.

    PubMed

    Yu, Peiqiang

    2005-09-07

    Synchrotron-based Fourier transform infrared microspectroscopy (S-FTIR) is a recently emerging bioanalytical microprobe capable of exploring the molecular chemistry within microstructures of feed tissues at a cellular or subcellular level. To date there has been very little application of hierarchical cluster analysis (CLA) and principal component analysis (PCA) to the study of feed inherent microstructures and feed molecular chemistry between feeds and/or between different structures within a feed, in relation to feed quality and nutrient availability using S-FTIR. In this paper, multivariate statistical methods--CLA and PCA--were used to analyze synchrotron-based FTIR spectra obtained from feed inherent microstructures within intact tissues as a novel approach. The S-FTIR spectral data of three feed inherent structures (structure 1, feed pericarp; structure 2, feed aleurone; structure 3, feed endosperm) and different varieties of feeds within cellular dimensions were collected at the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL), U.S. Department of Energy (NSLS-BNL, New York). Both PCA and CLA gave satisfactory analytical results and are conclusive in showing that they can discriminate and classify inherent structures and molecular chemistry between and among the feed tissues. They can also be used to identify whether differences exist between varieties. These statistical analyses place synchrotron-based FTIR microspectroscopy at the forefront of new potential techniques for rapid, nondestructive, and noninvasive screening of feed intrinsic microstructures and feed molecular chemistry in relation to the quality and nutritive value of feeds.
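
    The CLA/PCA combination is straightforward to reproduce in outline. The sketch below assumes a spectra matrix with one row per S-FTIR point spectrum; Ward linkage and a two-component score plot are common choices, not necessarily the author's exact settings, and the spectra here are random placeholders.

      # Sketch: hierarchical cluster analysis (CLA) and PCA on S-FTIR spectra.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(2)
      spectra = rng.standard_normal((60, 400))           # placeholder spectra

      # Agglomerative clustering (Ward linkage) to group similar spectra.
      Z = linkage(spectra, method="ward")
      clusters = fcluster(Z, t=3, criterion="maxclust")  # ask for 3 structure groups

      # PCA score-plot coordinates: the first two PCs often separate tissue types.
      scores = PCA(n_components=2).fit_transform(spectra)
      print(clusters[:10], scores.shape)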

  6. Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
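
    DOGEDYN itself is described in the cited NASA Tech Brief; as a rough illustration of gradient-descent sequential PCA with deflation, the sketch below uses Oja's rule with a fixed learning rate. The dominant-element selection and dynamic initial learning rate that distinguish DOGEDYN are deliberately omitted.

      # Sketch: sequential PCA by gradient ascent with deflation (Oja-style),
      # in the spirit of DOGEDYN but not the algorithm itself.
      import numpy as np

      def sequential_pca(X, n_components, lr=0.01, epochs=50):
          X = X - X.mean(axis=0)
          components = []
          for _ in range(n_components):
              w = np.random.default_rng(0).standard_normal(X.shape[1])
              w /= np.linalg.norm(w)
              for _ in range(epochs):
                  for x in X:                      # one vector per "sampling time"
                      y = w @ x
                      w += lr * y * (x - y * w)    # Oja's rule: converges to top PC
                  w /= np.linalg.norm(w)
              components.append(w.copy())
              X = X - np.outer(X @ w, w)           # deflate: remove extracted component
          return np.array(components)

      W = sequential_pca(np.random.default_rng(3).standard_normal((200, 8)), 3)
      print(W.shape)   # (3, 8)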

  7. Fusion-component lifetime analysis

    SciTech Connect

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.

  8. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    PubMed

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze the movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization.
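
    In outline, the eigendive pipeline is PCA over stacked postures followed by linear regression onto judges' scores. The sketch below uses synthetic joint-angle data and random stand-ins for the hand-crafted cues (body path, splash area, board tip motion); all dimensions are illustrative, not the study's.

      # Sketch: PCA "eigenpostures" plus linear regression of judges' scores.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)
      n_dives, n_frames, n_joints = 40, 100, 12
      poses = rng.standard_normal((n_dives * n_frames, n_joints))

      pca = PCA(n_components=4).fit(poses)                 # eigenpostures
      weights = pca.transform(poses).reshape(n_dives, -1)  # temporal weighting per dive

      # Append hand-crafted cues the paper also used (body path, splash area, ...).
      extras = rng.standard_normal((n_dives, 3))
      features = np.hstack([weights, extras])
      scores = rng.uniform(0, 10, size=n_dives)            # placeholder judge scores

      model = LinearRegression().fit(features, scores)
      print(model.score(features, scores))                 # R^2 on the toy data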

  9. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Principal component analysis (PCA) is then used to process the subseries data in the MLR model, and particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
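
    A minimal sketch of the wavelet-plus-regression idea follows, assuming PyWavelets for the decomposition; ordinary least squares replaces the paper's PSO parameter search, and the price series, window length, and wavelet choice are illustrative assumptions.

      # Sketch: wavelet features + PCA + linear regression on a synthetic series.
      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      price = np.cumsum(rng.standard_normal(500)) + 100.0   # stand-in for WTI prices

      window = 64
      X, y = [], []
      for t in range(window, len(price) - 1):
          coeffs = pywt.wavedec(price[t - window:t], "db4", level=3)
          X.append(np.concatenate(coeffs))     # subseries coefficients as features
          y.append(price[t + 1])               # next-day price to forecast
      X, y = np.array(X), np.array(y)

      Z = PCA(n_components=5).fit_transform(X)   # de-correlate wavelet features
      model = LinearRegression().fit(Z[:-50], y[:-50])
      print(model.score(Z[-50:], y[-50:]))       # out-of-sample R^2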

  10. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    PubMed Central

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Principal component analysis (PCA) is then used to process the subseries data in the MLR model, and particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  11. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the cost function of the ICA-R and paid little attention to improving the cost function itself; they did not fully exploit the prior information about the desired signal. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function and then construct a new cost function that directly uses a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve computing the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
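
    For orientation, the classic kurtosis-based one-unit update that this line of work builds on can be written in a few lines. The sketch below is the standard fixed-point iteration on whitened data, not the authors' ICA-R variant, and the two-channel mixture is synthetic.

      # Sketch: one-unit ICA by maximizing non-Gaussianity via the classic
      # kurtosis fixed-point update on whitened data.
      import numpy as np

      def whiten(X):
          X = X - X.mean(axis=1, keepdims=True)
          d, E = np.linalg.eigh(np.cov(X))
          return E @ np.diag(d ** -0.5) @ E.T @ X

      def one_unit_ica(Z, iters=100):
          rng = np.random.default_rng(6)
          w = rng.standard_normal(Z.shape[0])
          w /= np.linalg.norm(w)
          for _ in range(iters):
              w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w   # kurtosis fixed point
              w = w_new / np.linalg.norm(w_new)
          return w @ Z                                          # extracted source

      # Two toy channels: a spiky "fetal-like" source mixed with a smooth one.
      rng = np.random.default_rng(7)
      s = np.vstack([np.sign(rng.standard_normal(2000)) * rng.standard_normal(2000) ** 2,
                     np.sin(np.linspace(0, 60, 2000))])
      X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s
      print(one_unit_ica(whiten(X)).shape)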

  12. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    NASA Astrophysics Data System (ADS)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to the BI-RADS score, and the mode of their evaluations was taken as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but requires better training.
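
    The per-class projection scheme can be sketched as four PCA models with nearest-subspace assignment. Reconstruction error is used below as the closeness measure, which is one plausible reading of "closest class"; histogram sizes and training data are placeholders, not the study's mammograms.

      # Sketch: nearest-PCA-subspace scoring of mammogram histograms, one PCA
      # model per BI-RADS class; a query goes to the class whose subspace
      # reconstructs its histogram best.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(8)
      train = {c: rng.random((20, 256)) for c in range(1, 5)}   # histograms per class

      models = {c: PCA(n_components=5).fit(H) for c, H in train.items()}

      def classify(hist):
          errs = {}
          for c, pca in models.items():
              recon = pca.inverse_transform(pca.transform(hist[None, :]))
              errs[c] = np.linalg.norm(hist - recon[0])         # reconstruction error
          return min(errs, key=errs.get)

      print(classify(rng.random(256)))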

  13. Fit of evidence-based treatment components to youths served by wraparound process: a relevance mapping analysis.

    PubMed

    Bernstein, Adam; Chorpita, Bruce F; Rosenblatt, Abram; Becker, Kimberly D; Daleiden, Eric L; Ebesutani, Chad K

    2015-01-01

    This study investigated whether and which evidence-based treatment (EBT) components might generalize to youths served by the wraparound process. To examine these questions, the study used relevance mapping, an empirical methodology that compares youths in a given clinical population with participants in published randomized trials to determine who may be "coverable" by EBTs and which treatments may collectively be most applicable. In a large diverse clinical sample, youths receiving wraparound services (n = 828) were compared with youths receiving other services (n = 3,104) regarding (a) demographic and clinical profiles, (b) "coverability" by any EBTs, and (c) specific practices from those EBTs that most efficiently applied to each group. Participants in studies of EBTs matched the demographic and clinical characteristics of nearly as many youths receiving wraparound (58-59%) as those receiving non-wraparound services (61-64%). Moreover, the best-fitting solutions of relevant sets of practices were highly similar across groups. These results provide the first large-scale empirical characterization of fit between EBTs and youths receiving wraparound and suggest that these youths are well suited to benefit from clinical strategies commonly used in EBTs.

  14. The Components of Microbiological Risk Analysis.

    PubMed

    Liuzzo, Gaetano; Bentley, Stefano; Giacometti, Federica; Serraino, Andrea

    2015-02-03

    The paper describes the process of risk analysis from a food safety perspective. The steps of risk analysis, defined as a process consisting of three interconnected components (risk assessment, risk management, and risk communication), are analysed. The different components of risk assessment, risk management and risk communication are further described.

  15. The Components of Microbiological Risk Analysis

    PubMed Central

    Bentley, Stefano; Giacometti, Federica; Serraino, Andrea

    2015-01-01

    The paper describes the process of risk analysis from a food safety perspective. The steps of risk analysis, defined as a process consisting of three interconnected components (risk assessment, risk management, and risk communication), are analysed. The different components of risk assessment, risk management and risk communication are further described. PMID:27800384

  16. Suppression of inter-device variation for component analysis of turbid liquids based on spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Shengzhao; Zhang, Linna; Li, Gang; Lin, Ling

    2017-03-01

    Diffuse reflectance spectroscopy is a useful tool for obtaining quantitative information in turbid media, which is always achieved by developing a multivariate regression model that links the spectral signal to the component concentrations. However, in most cases, variations between the actual measurement and the modeling process of the device may cause errors in predicting a component's concentration. In this paper, we propose a data-processing method to resist these variations. The method involves performing a curve fitting of the multiple-position diffuse reflectance spectral data. One of the parameters in the fitting function was found to be insensitive to inter-device variations and sensitive to the component concentrations. The parameter of the fitted equation was used in the modeling instead of directly using the spectral signal. Experiments demonstrate the feasibility of the proposed method and its resistance to errors induced by inter-device variations.
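
    A toy version of the fitting idea: measure reflectance at several source-detector distances, fit a parametric curve, and keep a fitted parameter as the device-robust feature. The exponential form, detector positions, and noise level below are assumptions for illustration, not the paper's calibrated model.

      # Sketch: curve-fit multi-position diffuse reflectance and keep the decay
      # parameter, which absorbs device-level gain into the amplitude term.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(r, a, b):
          return a * np.exp(-b * r)            # reflectance vs. distance (assumed form)

      r = np.linspace(0.5, 3.0, 10)            # detector positions (arbitrary units)
      rng = np.random.default_rng(14)
      signal = model(r, 1.0, 0.8) * (1 + 0.02 * rng.standard_normal(10))

      (a, b), _ = curve_fit(model, r, signal, p0=(1.0, 1.0))
      print(b)   # b tracks concentration; a soaks up inter-device gain variation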

  17. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589

  18. Comparison of dimension reduction-based logistic regression models for case-control genome-wide association study: principal components analysis vs. partial least squares

    PubMed Central

    Yi, Honggang; Wo, Hongmei; Zhao, Yang; Zhang, Ruyang; Dai, Junchen; Jin, Guangfu; Ma, Hongxia; Wu, Tangchun; Hu, Zhibin; Lin, Dongxin; Shen, Hongbing; Chen, Feng

    2015-01-01

    Abstract With recent advances in biotechnology, genome-wide association studies (GWAS) have been widely used to identify genetic variants that underlie human complex diseases and traits. In case-control GWAS, the typical statistical strategy is traditional logistic regression (LR) based on single-locus analysis. However, such single-locus analysis leads to the well-known multiplicity problem, with a risk of inflating type I error and reducing power. Dimension reduction-based techniques, such as principal component-based logistic regression (PC-LR) and partial least squares-based logistic regression (PLS-LR), have recently gained much attention in the analysis of high-dimensional genomic data. However, the performance of these methods is still not clear, especially in GWAS. We conducted simulations and a real data application to compare the type I error and power of PC-LR, PLS-LR and LR as applicable to GWAS within a defined single nucleotide polymorphism (SNP) set region. We found that PC-LR and PLS-LR can reasonably control type I error under the null hypothesis. In contrast, LR, corrected by the Bonferroni method, was more conservative in all simulation settings. In particular, we found that PC-LR and PLS-LR had comparable power and both outperformed LR, especially when the causal SNP was in high linkage disequilibrium with genotyped ones and had a small effect size in simulation. Based on SNP set analysis, we applied all three methods to analyze non-small cell lung cancer GWAS data. PMID:26243516

  19. Component analysis and target cell-based neuroactivity screening of Panax ginseng by ultra-performance liquid chromatography coupled with quadrupole-time-of-flight mass spectrometry.

    PubMed

    Yuan, Jinbin; Chen, Yang; Liang, Jian; Wang, Chong-Zhi; Liu, Xiaofei; Yan, Zhihong; Tang, Yi; Li, Jiankang; Yuan, Chun-Su

    2016-12-01

    Ginseng is one of the most widely used natural medicines in the world. Recent studies have suggested that Panax ginseng has a wide range of beneficial effects on aging, central nervous system disorders, and neurodegenerative diseases. However, knowledge about the specific bioactive components of ginseng is still limited. This work aimed to screen for the bioactive components in Panax ginseng that act against neurodegenerative diseases, using a target cell-based bioactivity screening method. First, component analysis of Panax ginseng extracts was performed by UPLC-QTOF-MS, and a total of 54 compounds in white ginseng were characterized and identified according to retention behaviors, accurate molecular weights, MS characteristics, parent nuclei, aglycones, side chains, and literature data. A target cell-based bioactivity screening method was then developed to predict the candidate compounds in ginseng with SH-SY5Y cells. Four ginsenosides, Rg2, Rh1, Ro, and Rd, were observed to be active. The target cell-based bioactivity screening method coupled with the UPLC-QTOF-MS technique has suitable sensitivity and can be used as a screening tool for low-content bioactive constituents in natural products.

  20. Face Recognition by Independent Component Analysis

    PubMed Central

    Bartlett, Marian Stewart; Movellan, Javier R.; Sejnowski, Terrence J.

    2010-01-01

    A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. PMID:18244540

  1. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  2. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  3. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  4. Compressive-projection principal component analysis.

    PubMed

    Fowler, James E

    2009-10-01

    Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.

  5. Calculation of the elastic properties of prosthetic knee components with an iterative finite element-based modal analysis: quantitative comparison of different measuring techniques.

    PubMed

    Woiczinski, Matthias; Tollrian, Christopher; Schröder, Christian; Steinbrück, Arnd; Müller, Peter E; Jansson, Volkmar

    2013-08-01

    With the aging but still active population, research on total joint replacements relies increasingly on numerical methods, such as finite element analysis, to improve wear resistance of components. However, the validity of finite element models largely depends on the accuracy of their material behavior and geometrical representation. In particular, material properties are often based on manufacturer data or literature reports, but can alternatively be estimated by matching experimental measurements and structural predictions through modal analyses and identification of eigenfrequencies. The aim of the present study was to compare the accuracy of common setups used for estimating the eigenfrequencies of typical components often used in prosthetized joints. Eigenfrequencies of cobalt-chrome and ultra-high-molecular weight polyethylene components were therefore measured with four different setups, and used in modal analyses of corresponding finite element models for an iterative adjustment of their material properties. Results show that for the low-damped cobalt chromium endoprosthesis components, all common measuring setups provided accurate measurements. In the case of high-damped structures, measurements were only possible with setups including a continuously excitation system such as electrodynamic shakers. This study demonstrates that the iterative back-calculation of eigenfrequencies can be a reliable method to estimate the elastic properties for finite element models.
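
    The iterative back-calculation reduces to root-finding on a monotone frequency-modulus relationship. As a stand-in for the finite element modal analysis, the sketch below uses a closed-form cantilever-beam eigenfrequency and bisection; all numbers are hypothetical, with density and modulus roughly in the UHMWPE range.

      # Sketch: adjust Young's modulus E until a model eigenfrequency matches the
      # measured one (here a cantilever formula replaces the FE model).
      import numpy as np

      def first_eigenfrequency(E, rho=950.0, L=0.05, h=0.01):
          # First bending mode of a cantilever beam of rectangular section:
          # f = (1.875^2 / (2*pi*L^2)) * sqrt(E*I/(rho*A)), with I/A = h^2/12.
          I_over_A = h ** 2 / 12.0
          return (1.875 ** 2 / (2 * np.pi * L ** 2)) * np.sqrt(E * I_over_A / rho)

      f_measured = 1.5e3                        # Hz, hypothetical measurement
      lo, hi = 1e8, 1e10                        # plausible bounds for E (Pa)
      for _ in range(60):                       # bisection on the monotone f(E)
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if first_eigenfrequency(mid) < f_measured else (lo, mid)
      print(0.5 * (lo + hi))                    # estimated E in Pa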

  6. Finite Element Based Stress Analysis of Graphite Component in High Temperature Gas Cooled Reactor Core Using Linear and Nonlinear Irradiation Creep Models

    SciTech Connect

    Mohanty, Subhasish; Majumdar, Saurindranath

    2015-01-01

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  7. Accurate lithography hotspot detection based on principal component analysis-support vector machine classifier with hierarchical data clustering

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Gao, Jhih-Rong; Ding, Duo; Zeng, Xuan; Pan, David Z.

    2015-01-01

    As technology nodes continue to shrink, layout patterns become more sensitive to lithography processes, resulting in lithography hotspots that need to be identified and eliminated during physical verification. We propose an accurate hotspot detection approach based on a principal component analysis-support vector machine classifier. Several techniques, including hierarchical data clustering, data balancing, and multilevel training, are provided to enhance the performance of the proposed approach. Our approach is accurate and more efficient than conventional, time-consuming lithography simulation, and provides high flexibility for adapting to new lithography processes and rules.
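
    Stripped of the hierarchical clustering, data balancing, and multilevel training, the core classifier is a PCA-SVM pipeline, as sketched below on random stand-in features; scikit-learn is assumed, and the feature dimensions are illustrative.

      # Sketch: PCA + SVM classification of layout-clip features.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      rng = np.random.default_rng(9)
      X = rng.standard_normal((300, 1024))      # density features of layout clips
      y = rng.integers(0, 2, size=300)          # 1 = hotspot, 0 = non-hotspot

      clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", C=1.0))
      clf.fit(X[:200], y[:200])
      print(clf.score(X[200:], y[200:]))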

  8. SAR calibration and principal component analysis

    NASA Technical Reports Server (NTRS)

    Quegan, S.; Dutra, L. V.

    1991-01-01

    Principal components analysis of ideal, complex quadpolarized synthetic aperture data tells us how the information is encoded in the eigenstructure of the data from homogeneous, distributed targets. Comparison with real data indicates distortion of the correlation matrices by system effects. The implications for measurement of the terms in the true correlation matrix are displayed. These are related to how accurately we can make inferences about the products of principal components analysis.

  9. Components of Task-Based Needs Analysis of the ESP Learners with the Specialization of Business and Tourism

    ERIC Educational Resources Information Center

    Poghosyan, Naira

    2016-01-01

    In the following paper we shall thoroughly analyze the target learning needs of the learners within an ESP (English for Specific Purposes) context. The main concerns of ESP have always been and remain with the needs analysis, text analysis and preparing learners to communicate effectively in the tasks prescribed by their study or work situation.…

  10. Quality assessment of Herba Leonuri based on the analysis of multi-components using normal- and reversed-phase chromatographic methods.

    PubMed

    Dong, Shuya; He, Jiao; Hou, Huiping; Shuai, Yaping; Wang, Qi; Yang, Wenling; Sun, Zheng; Li, Qing; Bi, Kaishun; Liu, Ran

    2017-09-27

    A novel, improved and comprehensive method for quality evaluation and discrimination of Herba Leonuri has been developed and validated based on normal- and reversed-phase chromatographic methods. To identify Herba Leonuri, normal- and reversed-phase high-performance thin-layer chromatography fingerprints were obtained by comparing the colors and Rf values of the bands, and reversed-phase HPLC fingerprints were obtained by using an Agilent Poroshell 120 SB-C18 within 28 min. By similarity analysis and hierarchical clustering analysis, we show that Herba Leonuri samples share similar chromatographic patterns, whereas counterfeits and variants differ significantly. To quantify the bioactive components of Herba Leonuri, reversed-phase HPLC was performed to analyze syringate, leonurine, quercetin-3-O-robiniaglycoside, hyperoside, rutin, isoquercitrin, wogonin and genkwanin simultaneously by the single-standard-to-determine-multi-components method, with rutin as the internal standard. Meanwhile, normal-phase HPLC was performed by using an Agilent ZORBAX HILIC Plus within 6 min to determine trigonelline and stachydrine, using trigonelline as the internal standard. Notably, the bioactive components quercetin-3-O-robiniaglycoside and trigonelline were determined in Herba Leonuri for the first time. In general, the method integrating multiple chromatographic analyses offers an efficient way to standardize and identify Herba Leonuri.

  11. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
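
    The cross-validation recommendation can be illustrated with a simpler relative: selecting the number of retained components by held-out probabilistic-PCA likelihood. This is a generic stand-in for the mixed ICA/PCA criterion, not the authors' algorithm; the data below are synthetic.

      # Sketch: cross-validated choice of component count via PCA's built-in
      # probabilistic-PCA log-likelihood score.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(15)
      X = rng.standard_normal((150, 4)) @ rng.standard_normal((4, 10))
      X += 0.1 * rng.standard_normal((150, 10))   # 4 latent sources + noise

      ll = {k: cross_val_score(PCA(n_components=k), X, cv=5).mean()
            for k in range(1, 8)}
      print(max(ll, key=ll.get))   # component count with best held-out likelihood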

  12. Component fragilities. Data collection, analysis and interpretation

    SciTech Connect

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists.

  13. Independent component analysis for biomedical signals.

    PubMed

    James, Christopher J; Hesse, Christian W

    2005-02-01

    Independent component analysis (ICA) is increasing in popularity in the field of biomedical signal processing. It is generally used when it is required to separate measured multi-channel biomedical signals into their constituent underlying components. The use of ICA has been facilitated in part by the free availability of toolboxes that implement popular flavours of the techniques. Fundamentally ICA in biomedicine involves the extraction and separation of statistically independent sources underlying multiple measurements of biomedical signals. Technical advances in algorithmic developments implementing ICA are reviewed along with new directions in the field. These advances are specifically summarized with applications to biomedical signals in mind. The basic assumptions that are made when applying ICA are discussed, along with their implications when applied particularly to biomedical signals. ICA as a specific embodiment of blind source separation (BSS) is also discussed, and as a consequence the criterion used for establishing independence between sources is reviewed and this leads to the introduction of ICA/BSS techniques based on time, frequency and joint time-frequency decomposition of the data. Finally, advanced implementations of ICA are illustrated as applied to neurophysiologic signals in the form of electro-magnetic brain signals data.

  14. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty based on 3D human face data. Many authors have proposed the principal component analysis (PCA) method, which permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, in this paper, the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest ranked contestants in the 2009 competition. PCA was conducted on this real-face sample to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation, and therefore this hypothesis is not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness, but are undoubtedly significantly impacted by it, the authors conclude, based on their experience and on real faces, that PCA is not a valid prediction tool for attractiveness. The database of the features belonging to the analyzed sample is downloadable online and further contributions are welcome.

  15. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.

  16. PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Wang, Weichen

    2016-01-01

    This paper introduces a Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by the additive model via sieve approximations. By using the newly proposed Projected-PCA, the rates of convergence of the smooth factor loading matrices are obtained, which are much faster than those of conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests on whether observed covariates have explanatory power on the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374
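
    The projection step is ordinary least squares onto the covariate span followed by PCA on the fitted values. The sketch below assumes a small synthetic factor model; the dimensions and noise level are illustrative, and the sieve/additive modeling of loadings is omitted.

      # Sketch of the Projected-PCA idea: regress each variable on observed
      # covariates, then run PCA on the fitted (projected) data matrix.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(10)
      n, p, q = 200, 50, 3
      W = rng.standard_normal((n, q))                  # subject-specific covariates
      Y = W @ rng.standard_normal((q, p)) + 0.5 * rng.standard_normal((n, p))

      # Projection onto the covariate span: Y_hat = W (W'W)^{-1} W' Y.
      Y_hat = W @ np.linalg.lstsq(W, Y, rcond=None)[0]

      factors = PCA(n_components=2).fit_transform(Y_hat)   # smoothed factor estimates
      print(factors.shape)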

  17. Principal Component Analysis in ECG Signal Processing

    NASA Astrophysics Data System (ADS)

    Castells, Francisco; Laguna, Pablo; Sörnmo, Leif; Bollmann, Andreas; Roig, José Millet

    2007-12-01

    This paper reviews the current status of principal component analysis in the area of ECG signal processing. The fundamentals of PCA are briefly described and the relationship between PCA and the Karhunen-Loève transform is explained. Aspects of PCA related to data with temporal and spatial correlations are considered, as is adaptive estimation of principal components. Several ECG applications are reviewed where PCA techniques have been successfully employed, including data compression, ST-T segment analysis for the detection of myocardial ischemia and abnormalities in ventricular repolarization, extraction of atrial fibrillatory waves for detailed characterization of atrial fibrillation, and analysis of body surface potential maps.
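
    The data-compression application reviewed above has a particularly short sketch: PCA over aligned beats, keep a few coefficients per beat, reconstruct. Beat detection and resampling to a common length are assumed to have been done already; the beats below are synthetic.

      # Sketch: PCA compression of aligned ECG beats (one row per beat).
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(11)
      t = np.linspace(0, 1, 250)
      beats = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal((500, 250))

      pca = PCA(n_components=8).fit(beats)
      coeffs = pca.transform(beats)            # 250 samples -> 8 numbers per beat
      recon = pca.inverse_transform(coeffs)
      print(np.mean((beats - recon) ** 2))     # residual energy after compression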

  18. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study.

    PubMed

    Kreuzer, Stefan W; Pourmoghaddam, Amir; Leffers, Kevin J; Johnson, Clint W; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force sensing based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on transepicondylar axis, anterior-posterior axis, or the utilization of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and other methods (P < 0.05). Soft tissue balancing resulted in smaller deviation from anatomical epicondylar axis than any other method. 77% of operated knees were within a range of ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation.

  19. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study

    PubMed Central

    Kreuzer, Stefan W.; Pourmoghaddam, Amir; Leffers, Kevin J.; Johnson, Clint W.; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force sensing based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on transepicondylar axis, anterior-posterior axis, or the utilization of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and other methods (P < 0.05). Soft tissue balancing resulted in smaller deviation from anatomical epicondylar axis than any other method. 77% of operated knees were within a range of ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation. PMID:26881086

  20. Pse-Analysis: a python package for DNA/RNA and protein/peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-02-21

    To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All the aforementioned tedious jobs are done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.

  1. Pse-Analysis: a python package for DNA/RNA and protein/peptide sequence analysis based on pseudo components and kernel methods

    PubMed Central

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-01-01

    To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All the aforementioned tedious jobs are done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix. PMID:28076851

  2. Energy component analysis of π interactions.

    PubMed

    Sherrill, C David

    2013-04-16

    Fundamental features of biomolecules, such as their structure, solvation, and crystal packing and even the docking of drugs, rely on noncovalent interactions. Theory can help elucidate the nature of these interactions, and energy component analysis reveals the contributions from the various intermolecular forces: electrostatics, London dispersion terms, induction (polarization), and short-range exchange-repulsion. Symmetry-adapted perturbation theory (SAPT) provides one method for this type of analysis. In this Account, we show several examples of how SAPT provides insight into the nature of noncovalent π-interactions. In cation-π interactions, the cation strongly polarizes electrons in π-orbitals, leading to substantially attractive induction terms. This polarization is so important that a cation and a benzene attract each other when placed in the same plane, even though a consideration of the electrostatic interactions alone would suggest otherwise. SAPT analysis can also support an understanding of substituent effects in π-π interactions. Trends in face-to-face sandwich benzene dimers cannot be understood solely in terms of electrostatic effects, especially for multiply substituted dimers, but SAPT analysis demonstrates the importance of London dispersion forces. Moreover, detailed SAPT studies also reveal the critical importance of charge penetration effects in π-stacking interactions. These effects arise in cases with substantial orbital overlap, such as in π-stacking in DNA or in crystal structures of π-conjugated materials. These charge penetration effects lead to attractive electrostatic terms where a simpler analysis based on atom-centered charges, electrostatic potential plots, or even distributed multipole analysis would incorrectly predict repulsive electrostatics. SAPT analysis of sandwich benzene, benzene-pyridine, and pyridine dimers indicates that dipole/induced-dipole terms present in benzene-pyridine but not in benzene dimer are relatively

  3. EVMDD-Based Analysis and Diagnosis Methods of Multi-State Systems with Multi-State Components

    DTIC Science & Technology

    2014-01-01

    multi-state systems efficiently, methods based on binary decision diagrams (BDDs) [1,2,4,22] and multi-valued decision diagrams (MDDs) [8,15,19,20] have ... functions, they can be represented by BDDs and MDDs. Probabilities of states can be computed using BDDs and MDDs, where the time complexity is proportional to the number of nodes in a decision diagram. BDDs represent structure functions by converting multi-valued variables and function values into

  4. Waveguide-based terahertz metamaterial functional components

    NASA Astrophysics Data System (ADS)

    Wang, Z. G.; Zhou, Y. Q.; Yang, L. M.; Gong, Cheng

    2017-09-01

    We suggest a flexible platform based on waveguides for constructing metamaterial functional components which work in the terahertz waveband. The properties of the components can be changed by selecting the specified metamaterial resonance structures on the waveguide’s narrow wall. In the paper, metallic circular patches are used as resonance structures to design the functional component which can be configured as a filter or absorber. The simulation and experimental results demonstrate that the component’s attenuation and absorption bandwidth can be configured by changing the quantities and sizes of the resonance structures.

  5. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
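
    The unmixing view of ICA can be sketched directly: simulate pixels as convex mixtures of a few endmember spectra and recover abundance-like sources. scikit-learn's FastICA is used here as a generic stand-in for the paper's non-orthogonal learning algorithm, whose constraint structure differs; the toy data are not AVIRIS.

      # Sketch: hyperspectral pixel spectra as linear mixtures, unmixed with ICA.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(12)
      endmembers = rng.random((3, 100))                 # 3 material spectra, 100 bands
      abund = rng.dirichlet(np.ones(3), size=2500)      # abundances sum to one per pixel
      pixels = abund @ endmembers + 0.01 * rng.standard_normal((2500, 100))

      ica = FastICA(n_components=3, random_state=0)
      sources = ica.fit_transform(pixels)               # abundance-like maps (2500, 3)
      print(sources.shape)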

  6. Developing and Deploying a Partnership Network Knowledge Base for Analysis of the Partners and Components within NASA's Earth Science Community.

    NASA Astrophysics Data System (ADS)

    Anderson, D.; Lewis, D.; O'Hara, C.; Katragadda, S.

    2006-12-01

    The Partnership Network Knowledge Base (PNKB) is being developed to provide connectivity and deliver content for the research information needs of NASA's Applied Science Program and related scientific communities of practice. Data have been collected that will permit users to identify and analyze the current network of interactions between organizations within the community of practice, harvest research results tied to those interactions, and identify potential collaborative opportunities to further research streams. The PNKB is being developed in parallel with the Research Projects Knowledge Base (RPKB) and will be deployed in a manner that is fully compatible and interoperable with the NASA enterprise architecture (EA). Information needs have been assessed through a survey of potential users, evaluations of existing NASA resource users, and collaboration between Stennis Space Center and the Mississippi Research Consortium (MRC). The PNKB will assemble information on funded research institutions and categorize the research emphasis of each as it relates to NASA's six major science focus areas and 12 national applications. The PNKB will include information about organizations that conduct NASA Earth Science research, such as principal investigators' affiliations, contact information, relationship type with NASA and other NASA partners, funding arrangements, and formal agreements like memoranda of understanding. To further the utility of the PNKB, relational links have been integrated into the RPKB, which will contain data about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The combined PNKB and RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front-end client interfaces for data entry.

  7. Independent component analysis for audio signal separation

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Gnann, Volker

    2005-10-01

    In this paper an audio separation algorithm is presented, which is based on Independent Component Analysis (ICA). Audio separation could be the basis for many applications, for example in the field of telecommunications, quality enhancement of audio recordings, or audio classification tasks. Well-known ICA algorithms are not directly usable for real-world recordings, because they are designed for signal mixtures based on linear, time-invariant mixing matrices. To adapt a standard ICA algorithm to real-world two-channel auditory scenes with two audio sources, the input audio streams are segmented in the time domain and a constant mixing matrix is assumed within each segment. The next steps are a time-delay estimation for each audio source in the mixture and a determination of the number of existing sources. In the following processing steps, the input signals are time-shifted for each source and a standard ICA for linear mixtures is performed. After that, the remaining tasks are an evaluation of the ICA results and the construction of the resulting audio streams containing the separated sources.
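
    A minimal sketch of the segment-wise idea under stated assumptions: a placeholder two-channel signal, a fixed segment length, and no delay estimation or cross-segment permutation alignment, both of which the full method above requires:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        fs = 16000
        stereo = rng.random((2 * fs, 2))     # placeholder two-channel recording

        # Assume a constant mixing matrix inside each short segment.
        seg = fs // 4
        separated = []
        for start in range(0, len(stereo) - seg + 1, seg):
            block = stereo[start:start + seg]
            ica = FastICA(n_components=2, random_state=0)
            separated.append(ica.fit_transform(block))

        # Source order/scale can flip between segments and would need alignment.
        sources = np.concatenate(separated)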

  8. Principal Component Analysis of Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Cleary, Colleen; Rodriguez, David

    2017-01-01

    Principal component analysis is a technique for reducing variables and emphasizing patterns in a data set. In this study, the data set consisted of the attributes of 174 brown dwarfs. The PCA was performed on several photometric measurements in near-infrared wavelengths and colors in order to determine if these variables showed a correlation with the physical parameters. This research resulted in two separate models that predict luminosity and temperature. The application of principal component analysis on the near-infrared photometric measurements and colors of brown dwarfs, along with models, provides alternate methods for predicting the luminosity and temperature of brown dwarfs using only photometric measurements.
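
    A hedged sketch of the general workflow (PC scores feeding a linear model); the photometry matrix and the luminosity target below are random placeholders, not the study's data:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        photometry = rng.random((174, 6))    # hypothetical near-IR magnitudes/colors
        log_luminosity = rng.random(174)     # hypothetical physical parameter

        # Compress correlated photometric variables into a few components.
        scores = PCA(n_components=2).fit_transform(photometry)

        # Regress the physical parameter on the leading PC scores.
        model = LinearRegression().fit(scores, log_luminosity)
        predicted = model.predict(scores)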

  9. Identifying fouling events in a membrane-based drinking water treatment process using principal component analysis of fluorescence excitation-emission matrices.

    PubMed

    Peiris, Ramila H; Hallé, Cynthia; Budman, Hector; Moresoli, Christine; Peldszus, Sigrid; Huck, Peter M; Legge, Raymond L

    2010-01-01

    The identification of key foulants and the provision of early warning of high fouling events for drinking water treatment membrane processes are crucial for the development of effective countermeasures to membrane fouling, such as pretreatment. Principal foulants include organic, colloidal and particulate matter present in the membrane feed water. In this research, principal component analysis (PCA) of fluorescence excitation-emission matrices (EEMs) was identified as a viable tool for monitoring the performance of pre-treatment stages (in this case biological filtration), as well as ultrafiltration (UF) and nanofiltration (NF) membrane systems. In addition, fluorescence EEM-based principal component (PC) score plots, generated using the fluorescence EEMs obtained after just 1 hour of UF or NF operation, could be related to high fouling events likely caused by elevated levels of particulate/colloid-like material in the biofilter effluents. The fluorescence EEM-based PCA approach presented here is sensitive enough to be used at low organic carbon levels and has potential as an early detection method to identify high fouling events, allowing appropriate operational countermeasures to be taken.
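
    Illustrative only: each EEM is unfolded into a vector and PC scores are computed for a score plot; the random matrices below stand in for measured EEMs:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        # Placeholder stack: 30 samples, 50 excitation x 60 emission points each.
        eems = rng.random((30, 50, 60))

        # Unfold EEMs into vectors, then run PCA on the sample-by-feature matrix.
        X = eems.reshape(30, -1)
        scores = PCA(n_components=2).fit_transform(X)   # score-plot coordinates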

  10. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, but this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  11. A Parallel Product-Convolution approach for representing the depth varying Point Spread Functions in 3D widefield microscopy based on principal component analysis.

    PubMed

    Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A

    2010-03-29

    We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth variant response as a sum of few depth invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in a much better accuracy than the strata based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.
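
    A rough sketch under simplifying assumptions: a random stack stands in for measured PSFs, and a single depth's blur is formed as the mean convolution plus a weighted sum of a few depth-invariant convolutions (the paper's 1-D depth functions are reduced to per-depth PCA weights here):

        import numpy as np
        from scipy.signal import fftconvolve
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        psfs = rng.random((12, 33, 33))      # placeholder depth-varying PSF stack

        pca = PCA(n_components=3)
        weights = pca.fit_transform(psfs.reshape(12, -1))   # per-depth weights
        basis = pca.components_.reshape(3, 33, 33)          # PCA basis "PSFs"
        mean_psf = pca.mean_.reshape(33, 33)

        # Approximate the blur at one depth as a sum of invariant convolutions.
        obj = rng.random((128, 128))
        z = 5
        blurred = fftconvolve(obj, mean_psf, mode="same")
        for k in range(3):
            blurred += weights[z, k] * fftconvolve(obj, basis[k], mode="same")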

  12. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  13. Principal Component Based Diffeomorphic Surface Mapping

    PubMed Central

    Younes, Laurent; Miller, Michael I.

    2013-01-01

    We present a new diffeomorphic surface mapping algorithm under the framework of large deformation diffeomorphic metric mapping (LDDMM). Unlike existing LDDMM approaches, this new algorithm reduces the complexity of the estimation of diffeomorphic transformations by incorporating a shape prior in which a nonlinear diffeomorphic shape space is represented by a linear space of initial momenta of diffeomorphic geodesic flows from a fixed template. In addition, for the first time, the diffeomorphic mapping is formulated within a decision-theoretic scheme based on Bayesian modeling in which an empirical shape prior is characterized by a low dimensional Gaussian distribution on initial momentum. This is achieved using principal component analysis (PCA) to construct the eigenspace of the initial momentum. A likelihood function is formulated as the conditional probability of observing surfaces given any particular value of the initial momentum, which is modeled as a random field of vector-valued measures characterizing the geometry of surfaces. We define the diffeomorphic mapping as a problem that maximizes a posterior distribution of the initial momentum given observable surfaces over the eigenspace of the initial momentum. We demonstrate the stability of the initial momentum eigenspace when altering training samples using a bootstrapping method. We then validate the mapping accuracy and show robustness to outliers whose shape variation is not incorporated into the shape prior. PMID:21937344

  14. Principal component based diffeomorphic surface mapping.

    PubMed

    Qiu, Anqi; Younes, Laurent; Miller, Michael I

    2012-02-01

    We present a new diffeomorphic surface mapping algorithm under the framework of large deformation diffeomorphic metric mapping (LDDMM). Unlike existing LDDMM approaches, this new algorithm reduces the complexity of the estimation of diffeomorphic transformations by incorporating a shape prior in which a nonlinear diffeomorphic shape space is represented by a linear space of initial momenta of diffeomorphic geodesic flows from a fixed template. In addition, for the first time, the diffeomorphic mapping is formulated within a decision-theoretic scheme based on Bayesian modeling in which an empirical shape prior is characterized by a low dimensional Gaussian distribution on initial momentum. This is achieved using principal component analysis (PCA) to construct the eigenspace of the initial momentum. A likelihood function is formulated as the conditional probability of observing surfaces given any particular value of the initial momentum, which is modeled as a random field of vector-valued measures characterizing the geometry of surfaces. We define the diffeomorphic mapping as a problem that maximizes a posterior distribution of the initial momentum given observable surfaces over the eigenspace of the initial momentum. We demonstrate the stability of the initial momentum eigenspace when altering training samples using a bootstrapping method. We then validate the mapping accuracy and show robustness to outliers whose shape variation is not incorporated into the shape prior.

  15. Stochastic convex sparse principal component analysis.

    PubMed

    Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu

    2016-12-01

    Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications, where the features have some physical meanings, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. The sparse PCA can be formulated as an ℓ1 regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. Stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to the high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
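
    Not the paper's Cvx-SPCA, which uses a proximal variance-reduced stochastic scheme; the sketch below only shows the underlying soft-thresholding (proximal) idea on a deterministic gradient step for one l1-penalized loading vector:

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.random((200, 30))
        X -= X.mean(axis=0)
        S = X.T @ X / len(X)                 # sample covariance

        lam, step = 0.05, 0.01
        w = rng.standard_normal(30)
        w /= np.linalg.norm(w)
        for _ in range(500):
            w += step * 2 * S @ w            # gradient step on the variance term
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox of l1
            norm = np.linalg.norm(w)
            if norm > 0:
                w /= norm                    # keep the loading on the unit sphere
        sparse_loading = w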

  16. Principal component analysis of phenolic acid spectra

    USDA-ARS?s Scientific Manuscript database

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  17. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  18. Principal component analysis implementation in Java

    NASA Astrophysics Data System (ADS)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented using the Java programming language. We consider using the PCA algorithm especially for analysing data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform which is supported by Java.

  19. Selection of principal components based on Fisher discriminant ratio

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyan; Naghedolfeizi, Masoud; Arora, Sanjeev; Yousif, Nabil; Aberra, Dawit

    2016-05-01

    Principal component analysis transforms a set of possibly correlated variables into uncorrelated variables, and is widely used as a technique of dimensionality reduction and feature extraction. In some applications of dimensionality reduction, the objective is to use a small number of principal components to represent most variation in the data. On the other hand, the main purpose of feature extraction is to facilitate subsequent pattern recognition and machine learning tasks, such as classification. Selecting principal components for classification tasks aims for more than dimensionality reduction. The capability of distinguishing different classes is another major concern. Components that have larger eigenvalues do not necessarily have better distinguishing capabilities. In this paper, we investigate a strategy of selecting principal components based on the Fisher discriminant ratio. The ratio of between class variance to within class variance is calculated for each component, based on which the principal components are selected. The number of relevant components is determined by the classification accuracy. To alleviate overfitting which is common when there are few training data available, we use a cross-validation procedure to determine the number of principal components. The main objective is to select the components that have large Fisher discriminant ratios so that adequate class separability is obtained. The number of selected components is determined by the classification accuracy of the validation data. The selection method is evaluated by face recognition experiments.
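
    A minimal sketch of the selection step with placeholder data; fisher_ratio here is a simple unweighted between-class over within-class variance per component, one plausible reading of the ratio described above:

        import numpy as np
        from sklearn.decomposition import PCA

        def fisher_ratio(scores, labels):
            # Between-class over within-class variance, per component.
            classes = np.unique(labels)
            overall = scores.mean(axis=0)
            between = sum((scores[labels == c].mean(axis=0) - overall) ** 2
                          for c in classes)
            within = sum(scores[labels == c].var(axis=0) for c in classes)
            return between / within

        rng = np.random.default_rng(6)
        X, y = rng.random((100, 20)), rng.integers(0, 2, 100)
        scores = PCA(n_components=10).fit_transform(X)
        ratios = fisher_ratio(scores, y)
        selected = np.argsort(ratios)[::-1][:4]   # PCs with largest Fisher ratios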

  20. Medical diagnosis of atherosclerosis from Carotid Artery Doppler Signals using principal component analysis (PCA), k-NN based weighting pre-processing and Artificial Immune Recognition System (AIRS).

    PubMed

    Latifoğlu, Fatma; Polat, Kemal; Kara, Sadik; Güneş, Salih

    2008-02-01

    In this study, we proposed a new medical diagnosis system based on principal component analysis (PCA), k-NN based weighting pre-processing, and the Artificial Immune Recognition System (AIRS) for diagnosis of atherosclerosis from carotid artery Doppler signals. The suggested system consists of four stages. First, in the feature extraction stage, we obtained the features related to atherosclerosis disease using Fast Fourier Transform (FFT) modeling and by calculating the maximum frequency envelope of the sonograms. Second, in the dimensionality reduction stage, the 61 features of atherosclerosis disease were reduced to 4 features using PCA. Third, in the pre-processing stage, we weighted these 4 features using different values of k in a new weighting scheme based on k-NN based weighting pre-processing. Finally, in the classification stage, the AIRS classifier was used to classify subjects as healthy or having atherosclerosis. A classification accuracy of 100% was obtained by the proposed system using 10-fold cross-validation. This success shows that the proposed system is a robust and effective system for the diagnosis of atherosclerosis.
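
    An illustrative pipeline only: AIRS has no standard library implementation, so a k-NN classifier stands in for the final stage; the feature matrix and labels are placeholders:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        X = rng.random((120, 61))            # hypothetical FFT-envelope features
        y = rng.integers(0, 2, 120)          # healthy vs. atherosclerosis labels

        # 61 features -> 4 PCs, then classify; 10-fold CV as in the study.
        clf = make_pipeline(PCA(n_components=4), KNeighborsClassifier(n_neighbors=5))
        accuracy = cross_val_score(clf, X, y, cv=10).mean()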

  1. Modeling the Correlation of Composition-Processing-Property for TC11 Titanium Alloy Based on Principal Component Analysis and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Zeng, Weidong; Zhao, Yongqing; Shao, Yitao; Zhou, Yigang

    2012-11-01

    In the present investigation, the correlation of composition-processing-property for TC11 titanium alloy was established using principal component analysis (PCA) and artificial neural network (ANN) techniques based on experimental datasets obtained from forging experiments. During the PCA step, the feature vector is extracted by calculating the eigenvalues of the correlation coefficient matrix for the training dataset, and the dimension of the input variables is reduced from 11 to 6 features. Thus, PCA offers an efficient method to characterize the data with a high degree of dimensionality reduction. During the ANN step, the principal components were chosen as the input parameters and the mechanical properties as the output parameters, including the ultimate tensile strength (σ_b), yield strength (σ_0.2), elongation (δ), and reduction of area (φ). The training of the ANN model was conducted using the back-propagation learning algorithm. The results show close agreement between the predicted values of the PCA-ANN model and the experimental values, indicating that the established model is a powerful tool for constructing the correlation of composition-processing-property for TC11 titanium alloy. More importantly, the integrated method of PCA and ANN can also be utilized for mechanical property prediction for other alloys.
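
    A sketch under assumptions: random placeholder data, and scikit-learn's MLPRegressor standing in for the paper's back-propagation network:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(8)
        X = rng.random((80, 11))     # hypothetical composition/processing variables
        Y = rng.random((80, 4))      # hypothetical strength/ductility properties

        pcs = PCA(n_components=6).fit_transform(X)   # 11 -> 6 features, as above
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        ann.fit(pcs, Y)                              # multi-output regression
        predicted_properties = ann.predict(pcs)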

  2. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  3. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  4. Principle component analysis in F/10 and G/11 xylanase.

    PubMed

    Liu, Liangwei; Zhang, Jue; Chen, Bin; Shao, Weilan

    2004-09-10

    A bioinformatics method was used to analyze F/10 and G/11 xylanases based on principal component analysis, and a model was built to classify between these two folds with good results. The principal components were predicted to correspond to secondary structures; the components were analyzed against the architecture of each family and found comparable with the (beta/alpha)(8)-barrel of F/10 xylanase and the right-hand structure of G/11 xylanase. Compared with sequence similarities, this method gave the discriminating features a clear meaning. The largest component did not appear in the model, which revealed no difference between these two families.

  5. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    ERIC Educational Resources Information Center

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of the higher education. Many universities have made some effective achievement about evaluating the teaching quality. In this paper, we establish the Students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  6. PCA: Principal Component Analysis for spectra modeling

    NASA Astrophysics Data System (ADS)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixtures modelling. The five PCs and average spectra for the four classifications to classify objects are made available with the code.
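
    Not the IDL code described above; a Python sketch of the same two-stage idea, PC scores classified with a Gaussian mixture, on placeholder spectra:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(9)
        spectra = rng.random((120, 200))     # placeholder mid-infrared spectra

        scores = PCA(n_components=5).fit_transform(spectra)  # five PCs, as above
        gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
        classes = gmm.predict(scores)        # one class label per spectrum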

  7. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  8. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  9. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
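
    A much-simplified sketch of the reproducibility ranking: realizations are aligned to one reference run by maximal absolute correlation, and the average between-realization correlation ranks components (the full RAICAR procedure also selectively averages aligned components):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(10)
        X = rng.random((500, 10))            # placeholder data (e.g., time x voxels)

        # Repeat ICA with different initializations.
        runs = [FastICA(n_components=4, random_state=s).fit_transform(X)
                for s in range(5)]

        ref = runs[0]
        repro = np.zeros(ref.shape[1])
        for other in runs[1:]:
            c = np.abs(np.corrcoef(ref.T, other.T)[:ref.shape[1], ref.shape[1]:])
            repro += c.max(axis=1)           # best-matching component per run
        repro /= len(runs) - 1
        ranking = np.argsort(repro)[::-1]    # most reproducible components first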

  10. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electro-cardio-graphic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. A proposed adaptive rectification pre-processing is therefore used to linearize the preamplifier of the ECG, and then linear ICA is applied iteratively until the outputs reach their own stable kurtosis. We call such a new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  11. Nonlinear principal component analysis of climate data

    NASA Astrophysics Data System (ADS)

    Monahan, Adam Hugh

    2000-11-01

    A nonlinear generalisation of Principal Component Analysis (PCA), denoted Nonlinear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of climate data. It is found empirically that NLPCA partitions variance in the same fashion as does PCA. An important distinction is drawn between a modal P-dimensional NLPCA analysis, in which the approximation is the sum of P nonlinear functions of one variable, and a nonmodal analysis, in which the P-dimensional NLPCA approximation is determined as a nonlinear non- additive function of P variables. Nonlinear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor. The 1D and 2D NLPCA approximations explain 76% and 99.5% of the total variance, respectively, in contrast to 60% and 95% explained by the 1D and 2D PCA approximations. When applied to a data set consisting of monthly-averaged tropical Pacific Ocean sea surface temperatures (SST), the modal 1D NLPCA approximation describes average variability associated with the El Niño/Southern Oscillation (ENSO) phenomenon, as does the 1D PCA approximation. The NLPCA approximation, however, characterises the asymmetry in spatial pattern of SST anomalies between average warm and cold events in a manner that the PCA approximation cannot. The second NLPCA mode of SST is found to characterise differences in ENSO variability between individual events, and in particular is consistent with the celebrated 1977 ``regime shift''. A 2D nonmodal NLPCA approximation is determined, the interpretation of which is complicated by the fact that a secondary feature extraction problem has to be carried out. It is found that this approximation contains much the same information as that provided by the modal analysis. A modal NLPC analysis of tropical Indo-Pacific sea level pressure (SLP) finds that the first mode describes average ENSO variability in this field, and also characterises an asymmetry in SLP fields between average warm and
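
    NLPCA of this kind is commonly realized as a bottleneck autoencoder; a hedged sketch using scikit-learn's MLPRegressor as a stand-in autoencoder on placeholder data (not the Lorenz or SST data above):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(11)
        X = rng.random((1000, 3))            # placeholder 3-D samples

        # Encoder -> 1-D bottleneck -> decoder approximates a 1-D NLPCA mode.
        ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                          max_iter=5000, random_state=0)
        ae.fit(X, X)                         # train the network to reproduce input
        recon = ae.predict(X)
        explained = 1 - ((X - recon) ** 2).sum() / ((X - X.mean(0)) ** 2).sum()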

  12. Principal components analysis of Jupiter VIMS spectra

    USGS Publications Warehouse

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, Christophe; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) instrument took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work on the Great Red Spot using NIMS data. Some other new findings, presently under investigation, are presented.

  13. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. We focus on the time-independent failure response of these materials and present a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.

  14. [Assessment of aquatic ecosystem health based on principal component analysis with entropy weight: a case study of Wanning Reservoir (Hainan Island, China)].

    PubMed

    Xie, Fei; Gu, Ji-Guang; Lin, Zhang-Wen

    2014-06-01

    A new assessment method for ecosystem health, based on principal component analysis (PCA) with entropy weighting, was applied to Wanning Reservoir, Hainan Island, China, to investigate whether it could resolve the overlap in weighting that exists in the traditional entropy weight-based method. The results showed that the ecosystem health status of Wanning Reservoir improved overall from 2010 to 2012; the annual means of the ecosystem health comprehensive index (EHCI) were 0.534, 0.617 and 0.634 for 2010, 2011 and 2012, respectively, corresponding to health statuses of III (medium), II (good), and II (good). In addition, the ecosystem health status of the reservoir displayed a weak seasonal variation. The variation of EHCI became smaller in recent years, showing that Wanning Reservoir tended to be relatively stable. Comparison of the index weights under the two methods indicated that the four correlated indices (i.e., DO, COD, BOD, and NH4+-N) received a larger cumulative weight under the traditional method (0.382) than under the new method (0.178), suggesting that the application of PCA with entropy weighting can effectively avoid the overlap in weighting. In addition, correlation analysis between the trophic status index and EHCI showed a significant negative correlation (P < 0.05), indicating that the new method could improve not only the assignment of weights but also the accuracy of the results. The new method is suitable for evaluating the ecosystem health of the reservoir.
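
    A sketch of the entropy-weighting step only (the method above couples it with PCA); the indicator matrix is a placeholder assumed pre-normalized to positive values:

        import numpy as np

        rng = np.random.default_rng(12)
        # Placeholder: 12 monthly samples x 5 water-quality indicators.
        data = rng.random((12, 5)) + 1e-9

        # Entropy weighting: more dispersed indicators receive larger weights.
        p = data / data.sum(axis=0)
        entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(data))
        weights = (1 - entropy) / (1 - entropy).sum()

        # Composite index per sample as the weighted sum of indicators.
        ehci = data @ weights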

  15. Spectral Components Analysis of Diffuse Emission Processes

    SciTech Connect

    Malyshev, Dmitry; /KIPAC, Menlo Park

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with the energy spectra. Most of the existing methods use some information about the spatial distribution of components, e.g., closeness to an external template, independence of components, etc., in order to separate them. In this paper we propose a method where one puts conditions on the spectra only. The advantages of our method are: 1) it is internal: the maps of the components are constructed as combinations of data in different energy bins; 2) the components may be correlated among each other; 3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from a maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an Appendix, we present a generalization of the method, where one can also add a number of external templates.
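
    A toy version under strong assumptions: with the spectral forms fixed (a flat spectrum and a power law, both hypothetical), per-pixel least squares stands in for the likelihood maximization described above:

        import numpy as np

        rng = np.random.default_rng(13)
        n_bins, n_pix = 8, 1000
        energies = np.linspace(1.0, 8.0, n_bins)
        # Columns are assumed spectra: flat (CMB-like) and power-law (foreground-like).
        spectra = np.column_stack([np.ones(n_bins), energies ** -2.5])

        true_maps = rng.random((2, n_pix))
        data = spectra @ true_maps + rng.normal(0, 0.01, (n_bins, n_pix))

        # Component maps recovered as linear combinations of the energy-bin maps.
        maps, *_ = np.linalg.lstsq(spectra, data, rcond=None)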

  16. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  17. Principal component analysis for designed experiments

    PubMed Central

    2015-01-01

    Background: Principal component analysis is used to summarize matrix data, such as found in transcriptome, proteome or metabolome and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it estimates the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. Results: The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the
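
    A minimal sketch of two of the options above, axes identified on a training set and then shared with unknown samples, plus score scaling; all matrices are placeholders:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(14)
        train = rng.random((40, 100))        # training set reflecting the design
        unknown = rng.random((10, 100))      # new samples to classify

        pca = PCA(n_components=3).fit(train) # axes from training data only
        train_scores = pca.transform(train)
        unknown_scores = pca.transform(unknown)   # projected onto shared axes

        # Scale scores by the training spread to unify their size unit.
        scale = train_scores.std(axis=0, ddof=1)
        unknown_scaled = unknown_scores / scale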

  18. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  19. Scaling in ANOVA-simultaneous component analysis.

    PubMed

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research often high-dimensional data is collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identify those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling, and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis, and hence become apparent in the solution. Therefore, the cornerstone for proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest, and that different types of scaling may be proper for the different effect matrices. We illustrate that different scaling approaches can greatly affect the ASCA interpretation with a real-life example from nutritional research. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, as classification methods.

  20. Improving the use of principal component analysis to reduce physiological noise and motion artifacts to increase the sensitivity of task-based fMRI.

    PubMed

    Soltysik, David A; Thomasson, David; Rajan, Sunder; Biassou, Nadia

    2015-02-15

    Functional magnetic resonance imaging (fMRI) time series are subject to corruption by many noise sources, especially physiological noise and motion. Researchers have developed many methods to reduce physiological noise, including RETROICOR, which retroactively removes cardiac and respiratory waveforms collected during the scan, and CompCor, which applies principal components analysis (PCA) to remove physiological noise components without any physiological monitoring during the scan. We developed four variants of the CompCor method. The optimized CompCor method applies PCA to time series in a noise mask, but orthogonalizes each component to the BOLD response waveform and uses an algorithm to determine a favorable number of components to use as "nuisance regressors." Whole brain component correction (WCompCor) is similar, except that it applies PCA to time-series throughout the whole brain. Low-pass component correction (LCompCor) identifies low-pass filtered components throughout the brain, while high-pass component correction (HCompCor) identifies high-pass filtered components. We compared the new methods with the original CompCor method by examining the resulting functional contrast-to-noise ratio (CNR), sensitivity, and specificity. (1) The optimized CompCor method increased the CNR and sensitivity compared to the original CompCor method and (2) the application of WCompCor yielded the best improvement in the CNR and sensitivity. The sensitivity of the optimized CompCor, WCompCor, and LCompCor methods exceeded that of the original CompCor method. However, regressing noise signals showed a paradoxical consequence of reducing specificity for all noise reduction methods attempted.
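
    An illustrative sketch of the CompCor-style nuisance regression described above (PCA of noise-mask time series, orthogonalization to an assumed BOLD waveform, OLS removal); all signals are synthetic placeholders:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(15)
        n_t = 200
        noise_ts = rng.random((n_t, 300))        # time series from a noise mask
        bold = np.sin(np.linspace(0, 20, n_t))   # placeholder BOLD regressor

        # PCA of the noise-mask series gives candidate nuisance regressors.
        comps = PCA(n_components=5).fit_transform(noise_ts)

        # Orthogonalize each component to the BOLD waveform (optimized variant).
        b = bold / np.linalg.norm(bold)
        comps -= np.outer(b, comps.T @ b)

        # Remove the nuisance components from a voxel time series by regression.
        voxel = rng.random(n_t)
        design = np.column_stack([np.ones(n_t), comps])
        beta, *_ = np.linalg.lstsq(design, voxel, rcond=None)
        cleaned = voxel - design @ beta + beta[0]   # keep the mean level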

  1. WE-G-18C-09: Separating Perfusion and Diffusion Components From Diffusion Weighted MRI of Rectum Tumors Based On Intravoxel Incoherent Motion (IVIM) Analysis

    SciTech Connect

    Tyagi, N; Wengler, K; Mazaheri, Y; Hunt, M; Deasy, J; Gollub, M

    2014-06-15

    Purpose: Pseudodiffusion arises from the microcirculation of blood in the randomly oriented capillary network and contributes to the signal decay acquired using a multi-b value diffusion weighted (DW)-MRI sequence. This effect is more significant at low b-values and should be properly accounted for in apparent diffusion coefficient (ADC) calculations. The purpose of this study was to separate the perfusion and diffusion components based on a biexponential and a segmented monoexponential model using IVIM analysis. Methods: The signal attenuation is modeled as S(b) = S0[(1−f)exp(−bD) + f·exp(−bD*)]. Fitting the biexponential decay leads to the quantification of D, the true diffusion coefficient, D*, the pseudodiffusion coefficient, and f, the perfusion fraction. A nonlinear least squares fit and two segmented monoexponential models were used to derive the values for D, D*, and f. In the segmented approach, b = 200 s/mm^2 was used as the cut-off value for calculation of D. DW-MRIs of a rectal cancer patient were acquired before chemotherapy, before radiation therapy (RT), and 4 weeks into RT, and were investigated as an example case. Results: Mean ADC for the tumor drawn on the DWI cases was 0.93, 1.00 and 1.13 ×10^-3 mm^2/s before chemotherapy, before RT and 4 weeks into RT. The mean (D ×10^-3 mm^2/s, D* ×10^-3 mm^2/s, f %) based on the biexponential fit was (0.67, 18.6, 27.2%), (0.72, 17.7, 28.9%) and (0.83, 15.1, 30.7%) at these time points. The mean (D, D*, f) based on the segmented fit was (0.72, 10.5, 12.1%), (0.72, 8.2, 17.4%) and (0.82, 8.1, 16.5%). Conclusion: ADC values are typically higher than true diffusion coefficients. For tumors with a significant perfusion effect, ADC should be analyzed at higher b-values or separated from the perfusion component. The biexponential fit overestimates the perfusion fraction because of increased sensitivity to noise at low b-values.
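
    A hedged sketch of both fits on synthetic data: a biexponential curve_fit and a segmented monoexponential estimate of D from b >= 200 s/mm^2, mirroring the cut-off above:

        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, s0, f, d, d_star):
            # Biexponential IVIM signal model.
            return s0 * ((1 - f) * np.exp(-b * d) + f * np.exp(-b * d_star))

        b = np.array([0, 20, 50, 100, 200, 400, 600, 800], float)  # s/mm^2
        rng = np.random.default_rng(16)
        signal = ivim(b, 1.0, 0.25, 0.8e-3, 15e-3) + rng.normal(0, 0.005, b.size)

        # Full biexponential fit for S0, f, D, and D*.
        popt, _ = curve_fit(ivim, b, signal, p0=(1.0, 0.2, 1e-3, 10e-3),
                            maxfev=10000)

        # Segmented fit: D from b >= 200 s/mm^2, where perfusion is negligible.
        hi = b >= 200
        d_segmented = -np.polyfit(b[hi], np.log(signal[hi]), 1)[0]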

  2. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  3. Independent component analysis for automatic note extraction from musical trills.

    PubMed

    Brown, Judith C; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  4. Principal component analysis-T1ρ voxel based relaxometry of the articular cartilage: a comparison of biochemical patterns in osteoarthritis and anterior cruciate ligament subjects

    PubMed Central

    Russell, Colin; Randolph, Allison; Li, Xiaojuan; Majumdar, Sharmila

    2016-01-01

    Background: Quantitative MR, including T1ρ mapping, has been extensively used to probe early biochemical changes in knee articular cartilage of subjects with osteoarthritis (OA) and others at risk for cartilage degeneration, such as those with anterior cruciate ligament (ACL) injury and reconstruction. However, limited studies have been performed aimed to assess the spatial location and patterns of T1ρ. In this study we used a novel voxel-based relaxometry (VBR) technique coupled with principal component analysis (PCA) to extract relevant features so as to describe regional patterns and to investigate their similarities and differences in T1ρ maps in subjects with OA and subjects six months after ACL reconstruction (ACLR). Methods: T1ρ quantitative MRI images were collected for 180 subjects from two separate cohorts. The OA cohort included 93 osteoarthritic patients and 25 age-matched controls. The ACLR-6M cohort included 52 patients with unilateral ACL tears who were imaged 6 months after ACL reconstruction, and 10 age-matched controls. Non-rigid registration on a single template and local Z-score conversion were adopted for T1ρ spatial and intensity normalization of all the images in the dataset. PCA was used as a data dimensionality reduction to obtain a description of all subjects in a 10-dimensional feature space. Logistic linear regression was used to identify distinctive features of OA and ACL subjects. Results: Global prolongation of the Z-score was observed in both OA and ACL subjects compared to controls [higher values in 1st principal component (PC1); P=0.01]. In addition, relaxation time differences between superficial and deep cartilage layers of the lateral tibia and trochlea were observed to be significant distinctive features between OA and ACL subjects. OA subjects demonstrated similar values between the two cartilage layers [higher value in 2nd principal component (PC2); P=0.008], while ACL reconstructed subjects showed T1ρ prolongation

  5. Principal component analysis-T1ρ voxel based relaxometry of the articular cartilage: a comparison of biochemical patterns in osteoarthritis and anterior cruciate ligament subjects.

    PubMed

    Pedoia, Valentina; Russell, Colin; Randolph, Allison; Li, Xiaojuan; Majumdar, Sharmila

    2016-12-01

    Quantitative MR, including T1ρ mapping, has been extensively used to probe early biochemical changes in knee articular cartilage of subjects with osteoarthritis (OA) and others at risk for cartilage degeneration, such as those with anterior cruciate ligament (ACL) injury and reconstruction. However, limited studies have been performed aimed to assess the spatial location and patterns of T1ρ. In this study we used a novel voxel-based relaxometry (VBR) technique coupled with principal component analysis (PCA) to extract relevant features so as to describe regional patterns and to investigate their similarities and differences in T1ρ maps in subjects with OA and subjects six months after ACL reconstruction (ACLR). T1ρ quantitative MRI images were collected for 180 subjects from two separate cohorts. The OA cohort included 93 osteoarthritic patients and 25 age-matched controls. The ACLR-6M cohort included 52 patients with unilateral ACL tears who were imaged 6 months after ACL reconstruction, and 10 age-matched controls. Non-rigid registration on a single template and local Z-score conversion were adopted for T1ρ spatial and intensity normalization of all the images in the dataset. PCA was used as a data dimensionality reduction to obtain a description of all subjects in a 10-dimensional feature space. Logistic linear regression was used to identify distinctive features of OA and ACL subjects. Global prolongation of the Z-score was observed in both OA and ACL subjects compared to controls [higher values in 1(st) principal component (PC1); P=0.01]. In addition, relaxation time differences between superficial and deep cartilage layers of the lateral tibia and trochlea were observed to be significant distinctive features between OA and ACL subjects. OA subjects demonstrated similar values between the two cartilage layers [higher value in 2(nd) principal component (PC2); P=0.008], while ACL reconstructed subjects showed T1ρ prolongation specifically in the cartilage
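
    Illustrative only: PCA compression to a 10-dimensional feature space followed by logistic regression to identify discriminating components; the maps and cohort labels below are random placeholders:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(17)
        maps = rng.random((180, 5000))       # flattened, normalized T1rho maps
        labels = rng.integers(0, 2, 180)     # e.g., 0 = OA, 1 = ACLR cohort

        scores = PCA(n_components=10).fit_transform(maps)   # 10-D feature space
        clf = LogisticRegression(max_iter=1000).fit(scores, labels)
        discriminative = np.argsort(np.abs(clf.coef_.ravel()))[::-1]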

  6. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

    This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The EEG raw data sample rate is 128 Hz. The core size of the SVD processor is 580×580 um^2, and the operating frequency is 20 MHz. It consumes 0.774 mW of power during execution of the 8-channel EEG system.

  7. A Linear Structural Equation Model for Covert Verb Generation Based on Independent Component Analysis of fMRI Data from Children and Adolescents

    PubMed Central

    Karunanayaka, Prasanna; Schmithorst, Vincent J.; Vannest, Jennifer; Szaflarski, Jerzy P.; Plante, Elena; Holland, Scott K.

    2011-01-01

    Human language is a complex and protean cognitive ability. Young children, following well-defined developmental patterns, learn language rapidly and effortlessly, producing full sentences by the age of 3 years. However, the language circuitry continues to undergo significant neuroplastic changes extending well into the teenage years. Evidence suggests that the developing brain adheres to two rudimentary principles of functional organization: functional integration and functional specialization. At a neurobiological level, this distinction can be identified with progressive specialization or focalization reflecting consolidation and synaptic reinforcement of a network (Lenneberg, 1967; Muller et al., 1998; Berl et al., 2006). In this paper, we used group independent component analysis and linear structural equation modeling (McIntosh and Gonzalez-Lima, 1994; Karunanayaka et al., 2007) to tease out the developmental trajectories of the language circuitry based on fMRI data from 336 children ages 5–18 years performing a blocked, covert verb generation task. The results are analyzed and presented in the framework of theoretical models for neurocognitive brain development. This study highlights the advantages of combining both modular and connectionist approaches to cognitive functions; from a methodological perspective, it demonstrates the feasibility of combining data-driven and hypothesis-driven techniques to investigate developmental shifts in the semantic network. PMID:21660108

  8. The principle of exhaustiveness versus the principle of parsimony: a new approach for the identification of biomarkers from proteomic spot volume datasets based on principal component analysis.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Bobba, Marco; Gosetti, Fabio

    2010-05-01

    The field of biomarkers discovery is one of the leading research areas in proteomics. One of the most exploited approaches to this purpose consists of the identification of potential biomarkers from spot volume datasets produced by 2D gel electrophoresis. In this case, problems may arise due to the large number of spots present in each map and the small number of maps available for each class (control/pathological). Multivariate methods are therefore usually applied together with variable selection procedures, to provide a subset of potential candidates. The variable selection procedures available usually pursue the so-called principle of parsimony: the most parsimonious set of spots is selected, providing the best classification performances. This approach is not effective in proteomics since all potential biomarkers must be identified: not only the most discriminating spots, usually related to general responses to inflammatory events, but also the smallest differences and all redundant molecules, i.e. biomarkers showing similar behaviour. The principle of exhaustiveness should be pursued rather than parsimony. To solve this problem, a new ranking and classification method, "Ranking-PCA", based on principal component analysis and variable selection in forward search, is proposed here for the exhaustive identification of all possible biomarkers. The method is successfully applied to three different proteomic datasets to prove its effectiveness.

  9. Impact of parameter fluctuations on the performance of ethanol precipitation in production of Re Du Ning Injections, based on HPLC fingerprints and principal component analysis.

    PubMed

    Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei

    2016-01-01

    The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one by one and then examined by HPLC fingerprint analyses. Unlike traditional evaluation models based on one or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the most important parameters influencing the recovery of active compounds in the precipitation resultants. The present study provides a reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and may help pharmaceutical enterprises adopt a scientific, reasonable, and cost-effective approach to ensure batch-to-batch quality consistency of the final products.

  10. Practical Issues in Component Aging Analysis

    SciTech Connect

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair-same-as-new, leading to a renewal process, and repair-same-as-old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process. The common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
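
    One simple check of the kind alluded to is a trend test on the failure times. The sketch below implements the standard Laplace trend statistic (a common choice, not necessarily the test used in the paper), which is approximately standard normal under a homogeneous Poisson process and grows positive when failures cluster late in the observation window (aging).

        import math

        def laplace_trend_statistic(failure_times, T):
            """Laplace trend test: U ~ N(0,1) under a homogeneous Poisson process.
            Significantly positive U suggests a deteriorating (aging) system, i.e., a
            nonhomogeneous Poisson process rather than a stationary renewal process."""
            n = len(failure_times)
            return (sum(failure_times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

        # Example: failure times clustered late in the window suggest aging
        print(laplace_trend_statistic([60, 75, 82, 90, 95], T=100.0))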

  11. Structural analysis methods development for turbine hot section components

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1989-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine Hot Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components of isotropic and anisotropic nickel-base superalloys are presented.

  12. Structural Analysis Methods Development for Turbine Hot Section Components

    NASA Technical Reports Server (NTRS)

    Thompson, Robert L.

    1988-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine Hot Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components of isotropic and anisotropic nickel-base superalloys are presented.

  13. Sparse Exponential Family Principal Component Analysis.

    PubMed

    Lu, Meng; Huang, Jianhua Z; Qian, Xiaoning

    2016-12-01

    We propose a Sparse exponential family Principal Component Analysis (SePCA) method suitable for any type of data following exponential family distributions, to achieve simultaneous dimension reduction and variable selection for better interpretation of the results. Because of the generality of exponential family distributions, the method can be applied to a wide range of applications, in particular when analyzing high dimensional next-generation sequencing data and genetic mutation data in genomics. The use of sparsity-inducing penalty helps produce sparse principal component loading vectors such that the principal components can focus on informative variables. By using an equivalent dual form of the formulated optimization problem for SePCA, we derive optimal solutions with efficient iterative closed-form updating rules. The results from both simulation experiments and real-world applications have demonstrated the superiority of our SePCA in reconstruction accuracy and computational efficiency over traditional exponential family PCA (ePCA), the existing Sparse PCA (SPCA) and Sparse Logistic PCA (SLPCA) algorithms.
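
    SePCA itself is not in standard libraries, but the core idea of a sparsity-inducing penalty on the loading vectors can be illustrated for Gaussian data with scikit-learn's SparsePCA; this is an analogous sketch on random placeholder data, not the authors' algorithm.

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 500))     # e.g., samples x genes

        # alpha controls the L1 penalty: larger alpha -> sparser loading vectors
        spca = SparsePCA(n_components=5, alpha=2.0, random_state=0)
        scores = spca.fit_transform(X)

        # Count how many variables each sparse component actually uses
        print((spca.components_ != 0).sum(axis=1))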

  14. Differentiating malignant from benign breast tumors on acoustic radiation force impulse imaging using fuzzy-based neural networks with principal component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Hsiao-Chuan; Chou, Yi-Hong; Tiu, Chui-Mei; Hsieh, Chi-Wen; Liu, Brent; Shung, K. Kirk

    2017-03-01

    Many modalities have been developed as screening tools for breast cancer. A new screening method called acoustic radiation force impulse (ARFI) imaging was created for distinguishing breast lesions based on localized tissue displacement. This displacement was quantitated by virtual touch tissue imaging (VTI). However, VTIs sometimes express results contrary to the intensity information in clinical observation. In this study, a fuzzy-based neural network with principal component analysis (PCA) was proposed to differentiate the texture patterns of malignant from benign breast tumors. Eighty VTIs were randomly retrospected. Thirty-four patients were determined as BI-RADS category 2 or 3, and the rest were determined as BI-RADS category 4 or 5 by two leading radiologists. Morphological operations and Boolean algebra were applied as image preprocessing to acquire regions of interest (ROIs) on the VTIs. Twenty-four quantitative parameters derived from first-order statistics (FOS), fractal dimension, and the gray level co-occurrence matrix (GLCM) were utilized to analyze the texture patterns of breast tumors on VTIs. PCA was employed to reduce the dimension of the features, and a fuzzy-based neural network served as the classifier to differentiate malignant from benign breast tumors. An independent-samples test was used to examine the significance of differences between benign and malignant breast tumors. The area Az under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy were calculated to evaluate the performance of the system. Almost all texture parameters showed significant differences between malignant and benign tumors (p < 0.05), except the average fractal dimension. For all features classified by the fuzzy-based neural network, the sensitivity, specificity, accuracy, and Az were 95.7%, 97.1%, 95% and 0.964, respectively. However, the sensitivity, specificity, accuracy, and Az could be increased to 100%, 97.1%, 98.8% and 0.985, respectively.

  15. Principal component analysis of synthetic galaxy spectra

    NASA Astrophysics Data System (ADS)

    Ronen, Shai; Aragon-Salamanca, Alfonso; Lahav, Ofer

    1999-02-01

    We analyse synthetic galaxy spectra from the evolutionary models of Bruzual & Charlot and Fioc & Rocca-Volmerange using the method of principal component analysis (PCA). We explore synthetic spectra with different ages, star formation histories and metallicities, and identify the principal components (PCs) of variance in the spectra resulting from these different model parameters. The PCA provides a more objective and informative alternative to diagnostics based on individual spectral lines. We discuss how the PCs can be used to estimate the input model parameters, and explore the impact of dust and noise on this inverse problem. We also discuss how changing the sampling of the ages and other model parameters affects the resulting PCs. Our first two synthetic PCs agree with a similar analysis of observed spectra obtained by Kennicutt and the 2dF redshift survey. We conclude that with a good enough signal-to-noise ratio (S/N > 10) it is possible to derive age, star formation history and metallicity from observed galaxy spectra using PCA.
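
    The inverse problem described, estimating input model parameters from PC amplitudes, can be sketched as follows; the toy spectral model here is a placeholder standing in for the evolutionary synthesis models, not a physical prescription.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        age = rng.uniform(0.1, 12.0, size=200)                 # toy model ages
        wav = np.linspace(3500, 7500, 300)
        spectra = np.exp(-wav[None, :] / (4000 + 300 * age[:, None]))  # toy spectra
        spectra += rng.normal(scale=0.01, size=spectra.shape)          # noise term

        pca = PCA(n_components=5)
        pcs = pca.fit_transform(spectra)     # PC amplitudes for each spectrum

        # Inverse problem: estimate the input parameter from the PC projections
        est = LinearRegression().fit(pcs, age)
        print(est.score(pcs, age))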

  16. Multivariate analysis of the volatile components in tobacco based on infrared-assisted extraction coupled to headspace solid-phase microextraction and gas chromatography-mass spectrometry.

    PubMed

    Yang, Yanqin; Pan, Yuanjiang; Zhou, Guojun; Chu, Guohai; Jiang, Jian; Yuan, Kailong; Xia, Qian; Cheng, Changhe

    2016-11-01

    A novel method coupling infrared-assisted extraction with headspace solid-phase microextraction followed by gas chromatography-mass spectrometry has been developed for the rapid determination of the volatile components in tobacco. The optimal extraction conditions for maximizing the extraction efficiency were as follows: 65 μm polydimethylsiloxane-divinylbenzene fiber, extraction time of 20 min, infrared power of 175 W, and a distance between the infrared lamp and the headspace vial of 2 cm. Under the optimum conditions, 50 components were found in all ten tobacco samples from different geographical origins. Compared with conventional water-bath heating and non-heating extraction methods, the extraction efficiency of infrared-assisted extraction was greatly improved. Furthermore, multivariate analyses, including principal component analysis, hierarchical cluster analysis, and similarity analysis, were performed to evaluate the chemical information of these samples and divided them into three classes: rich, moderate, and fresh flavors. These classification results were consistent with the sensory evaluation, which is pivotal and meaningful for tobacco discrimination. As a simple, fast, cost-effective, and highly efficient method, infrared-assisted extraction coupled to headspace solid-phase microextraction, combined with suitable chemometrics, is powerful and promising for distinguishing the geographical origins of tobacco samples.

  17. Lattice Independent Component Analysis for Mobile Robot Localization

    NASA Astrophysics Data System (ADS)

    Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz

    This paper introduces an approach to appearance-based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be affine independent and are therefore candidates to be the endmembers of the data. The selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show on a sample path experiment that our approach can recognise the robot's location, and we compare the results with Independent Component Analysis (ICA).
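
    Once endmembers have been induced (random placeholders below stand in for EIHA output), the linear unmixing step can be sketched with non-negative least squares, one common choice under a non-negative abundance assumption; the resulting coefficients become the feature vector used for view classification.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)
        E = rng.random((64, 3))              # 3 endmember spectra (64 bands), stand-in for EIHA output
        true_w = np.array([0.6, 0.3, 0.1])
        pixel = E @ true_w + rng.normal(scale=0.01, size=64)

        # Non-negative least squares yields the mixing coefficients
        w, residual = nnls(E, pixel)
        print(w)                             # feature vector for view recognition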

  18. Face recognition using two-dimensional nonnegative principal component analysis

    NASA Astrophysics Data System (ADS)

    Ma, Peng; Yang, Dan; Ge, Yongxin; Zhang, Xiaohong; Qu, Ying

    2012-07-01

    Although two-dimensional principal component analysis (2DPCA) extracts image features directly from 2D image matrices rather than one-dimensional vectors, 2DPCA is based only on whole images, preserving total variance by maximizing the trace of the feature covariance matrix. Thus, 2DPCA cannot extract localized components, which are usually important for face recognition. Inspired by nonnegative matrix factorization (NMF), which is based on localized features, we propose a novel algorithm for face recognition called two-dimensional nonnegative principal component analysis (2DNPCA) to extract localized components while maintaining the maximal-variance property of 2DPCA. 2DNPCA is a matrix-based algorithm that preserves the local structure of facial images and uses a nonnegativity constraint to learn localized components. Therefore, 2DNPCA has the advantages of both 2DPCA and NMF. Furthermore, 2DNPCA avoids time-consuming computation by removing the restriction of minimizing the cost function and extracting only the base matrix. The nearest neighbor (NN) classifier and the linear regression (LR) classifier are used for classification, and extensive experimental results show that both 2DNPCA plus NN and 2DNPCA plus LR are very efficient approaches for face recognition.
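
    For reference, a minimal NumPy sketch of the baseline 2DPCA step that 2DNPCA extends: the image covariance matrix is built directly from 2D image matrices and each image is projected onto its leading eigenvectors. The image stack and the number of axes are placeholders.

        import numpy as np

        rng = np.random.default_rng(5)
        imgs = rng.random((40, 32, 32))                  # 40 face images, 32x32

        A_bar = imgs.mean(axis=0)
        # Image covariance matrix: G = mean over images of (A - Abar)^T (A - Abar)
        G = np.mean([(A - A_bar).T @ (A - A_bar) for A in imgs], axis=0)

        # Columns of X are the leading eigenvectors (projection axes)
        vals, vecs = np.linalg.eigh(G)                   # eigenvalues in ascending order
        X = vecs[:, ::-1][:, :5]                         # top 5 axes

        features = imgs @ X                              # each image -> 32x5 feature matrix
        print(features.shape)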

  19. Compound fault diagnosis of gearboxes based on GFT component extraction

    NASA Astrophysics Data System (ADS)

    Ou, Lu; Yu, Dejie

    2016-11-01

    Compound fault diagnosis of gearboxes is of great importance to the long-term safe operation of rotating machines, and the key is to separate the different fault components. In this paper, the path graph is introduced into vibration signal analysis and the graph Fourier transform (GFT) of vibration signals is investigated in the graph spectrum domain. To better extract the fault components in gearboxes, a new adjacency weight matrix is defined, and the GFTs of simulated signals of a gear and a bearing with localized faults are analyzed. Further, since the GFT graph spectra of the gear fault component and the bearing fault component are mainly distributed in the low-order region and the high-order region, respectively, a novel method for the compound fault diagnosis of gearboxes based on GFT component extraction is proposed. In this method, the nonzero ratios, introduced as an auxiliary measure for analyzing the eigenvectors, and the GFT of a gearbox vibration signal are first calculated. Then, the order thresholds for the reconstructed fault components are determined and the fault components are extracted. Finally, Hilbert demodulation analyses are conducted. According to the envelope spectra of the fault components, the faults of the gear and the bearing can be diagnosed respectively. The performance of the proposed method is validated on simulation data and on experimental signals from a gearbox with compound faults.
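
    A minimal sketch of the graph Fourier transform on a path graph, as used here for vibration signals: the signal is expanded in the eigenvectors of the path-graph Laplacian, and a band of low-order coefficients is kept to reconstruct one component. The cutoff of 32 coefficients is arbitrary, standing in for the order thresholds the method determines.

        import numpy as np

        N = 512
        x = np.sin(2 * np.pi * 5 * np.arange(N) / N) \
            + 0.2 * np.random.default_rng(6).normal(size=N)

        # Path-graph Laplacian: each sample is a vertex linked to its neighbours
        L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
        L[0, 0] = L[-1, -1] = 1
        lam, U = np.linalg.eigh(L)           # graph frequencies and Fourier basis

        x_hat = U.T @ x                      # graph Fourier transform
        low = np.zeros(N)
        low[:32] = 1                         # keep low-order coefficients only
        x_low = U @ (low * x_hat)            # reconstructed low-order component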

  20. Component analysis of a school-based substance use prevention program in Spain: contributions of problem solving and social skills training content.

    PubMed

    Espada, José P; Griffin, Kenneth W; Pereira, Juan R; Orgilés, Mireia; García-Fernández, José M

    2012-02-01

    The objective of the present research was to examine the contribution of two intervention components, social skills training and problem solving training, to alcohol- and drug-related outcomes in a school-based substance use prevention program. Participants included 341 Spanish students aged 12 to 15 who received the prevention program Saluda in one of four experimental conditions: full program, social skills condition, problem solving condition, and a wait-list control group. Students completed self-report surveys at the pretest, posttest, and 12-month follow-up assessments. Compared to the wait-list control group, the three intervention conditions produced reductions in alcohol use and in intentions to use other substances. The intervention effect size for alcohol use was greatest in magnitude for the full program with all components. Problem-solving skills measured at the follow-up were strongest in the condition that received the full program with all components. We discuss the implications of these findings, including the advantages and disadvantages of implementing interventions tailored to students by selecting intervention components after a skills-based needs assessment.

  1. Tensorial extensions of independent component analysis for multisubject FMRI analysis.

    PubMed

    Beckmann, C F; Smith, S M

    2005-03-01

    We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.

  2. To explain the variation of OGTT dynamics by biological mechanisms: a novel approach based on principal components analysis in women with history of GDM.

    PubMed

    Göbl, Christian S; Bozkurt, Latife; Mittlböck, Martina; Leutner, Michael; Yarragudi, Rajashri; Tura, Andrea; Pacini, Giovanni; Kautzky-Willer, Alexandra

    2015-07-01

    Early reexamination of carbohydrate metabolism via an oral glucose tolerance test (OGTT) is recommended after pregnancy with gestational diabetes (GDM). In this report, we aimed to assess the dominant patterns of dynamic OGTT measurements and subsequently explain them in terms of the underlying pathophysiological processes. Principal components analysis (PCA), a statistical procedure that aims to reduce the dimensionality of multiple interrelated measures to a set of linearly uncorrelated variables (the principal components), was performed on OGTT data of glucose, insulin and C-peptide, in addition to age and body mass index (BMI), from 151 women (n = 110 females after GDM and n = 41 controls) at 3-6 months after delivery. These components were explained by frequently sampled intravenous glucose tolerance test (FSIGT) parameters. Moreover, their relation to the later development of overt diabetes was studied. Three principal components (PCs) were identified, which explained 71.5% of the variation of the original 17 variables. PC1 (explaining 47.1%) was closely related to postprandial OGTT levels and FSIGT-derived insulin sensitivity (r = 0.68), indicating that it mirrors insulin sensitivity in skeletal muscle. PC2 (explaining 17.3%) and PC3 (explaining 7.1%) were shown to be associated with β-cell failure and fasting (i.e., hepatic) insulin resistance, respectively. All three components were related to diabetes progression (which occurred in n = 25 females after GDM) and showed significant changes in long-term trajectories. A large portion of the postpartum OGTT data is explained by principal components representing pathophysiological mechanisms on the pathway to impaired carbohydrate metabolism. Our results improve the understanding of the underlying biological processes to provide an accurate postgestational risk stratification. Copyright © 2015 the American Physiological Society.

  3. Resting-State Functional Connectivity by Independent Component Analysis-Based Markers Corresponds to Areas of Initial Seizure Propagation Established by Prior Modalities from the Hypothalamus

    PubMed Central

    Wilfong, Angus A.; Curry, Daniel J.

    2016-01-01

    The aims of this study were to evaluate a clinically practical functional connectivity (fc) protocol designed to blindly identify the corresponding areas of initial seizure propagation and to differentiate these areas from remote secondary areas affected by seizure. The patients in this cohort had intractable epilepsy caused by intrahypothalamic hamartoma, which is the location of the ictal focus. The ictal propagation pathway is homogeneous and well established, creating the optimum situation for the proposed method validation study. Twelve patients with seizures from hypothalamic hamartoma and six normal control patients underwent resting-state functional MRI, using independent component analysis (ICA) to identify network differences in patients. This was followed by seed-based connectivity measures to determine the extent of fc derangement between the hypothalamus and these areas. The areas with significant change in connectivity were compared with the results of the prior modalities used to evaluate seizure propagation. The left amygdala-parahippocampal gyrus area, cingulate gyrus, and occipitotemporal gyrus demonstrated the highest derangement in connectivity with the hypothalamus, p < 0.01, corresponding to the initial seizure propagation areas established by prior modalities. Areas of secondary ictal propagation were differentiated from these initial locations by first being identified as an abnormal neuronal signal source through ICA, while not showing significant connectivity directly with the known ictal focus. Noninvasive connectivity measures correspond to areas of initial ictal propagation and differentiate such areas from secondary ictal propagation, which may aid in planning surgical disconnection of the ictal focus and supports the use of this newer modality for adjunctive information in epilepsy surgery evaluation. PMID:27503346

  4. Multi-component separation and analysis of bat echolocation calls.

    PubMed

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
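
    The final step described, extracting detailed time-frequency information from an isolated component with the Hilbert transform, can be sketched as follows on a toy FM sweep; the sampling rate and sweep parameters are illustrative only, not measured bat-call values.

        import numpy as np
        from scipy.signal import hilbert, chirp

        fs = 250_000                                    # illustrative sampling rate
        t = np.arange(0, 0.003, 1 / fs)
        s = chirp(t, f0=55_000, f1=25_000, t1=0.003)    # toy downward FM sweep

        analytic = hilbert(s)                           # analytic signal
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) / (2 * np.pi) * fs   # instantaneous frequency (Hz)
        envelope = np.abs(analytic)                     # instantaneous amplitude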

  5. Deep overbite malocclusion: analysis of the underlying components.

    PubMed

    El-Dawlatly, Mostafa M; Fayed, Mona M Salah; Mostafa, Yehya A

    2012-10-01

    A deepbite malocclusion should not be approached as a disease entity; instead, it should be viewed as a clinical manifestation of underlying discrepancies. The aim of this study was to investigate the various skeletal and dental components of deep bite malocclusion, the significance of the contribution of each, and whether there are certain correlations between them. Dental and skeletal measurements were made on lateral cephalometric radiographs and study models of 124 patients with deepbite. These measurements were statistically analyzed. An exaggerated curve of Spee was the greatest shared dental component (78%), significantly higher than any other component (P = 0.0335). A decreased gonial angle was the greatest shared skeletal component (37.1%), highly significant compared with the other components (P = 0.0019). A strong positive correlation was found between the ramus/Frankfort horizontal angle and the gonial angle; weaker correlations were found between various components. An exaggerated curve of Spee and a decreased gonial angle were the greatest contributing components. This analysis of deepbite components could help clinicians design individualized mechanotherapies based on the underlying cause, rather than being biased toward predetermined mechanics when treating patients with a deepbite malocclusion. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  6. Discriminant Multitaper Component Analysis of EEG

    NASA Astrophysics Data System (ADS)

    Dyrholm, Mads; Sajda, Paul

    2011-06-01

    This work extends Bilinear Discriminant Component Analysis to the case of oscillatory activity with phase variability allowed across trials. The proposed method learns a spatial profile together with a multitaper basis that can integrate oscillatory power in a band-limited fashion. We demonstrate the method by predicting the handedness of a subject's button press given multivariate EEG data. We show that our method learns multitapers sensitive to oscillatory activity in the 8-12 Hz range, with spatial filters selective for lateralized motor cortex. This finding is consistent with the well-known mu-rhythm, whose power is known to modulate as a function of which hand a subject plans to move, and which is thus expected to be discriminative (predictive) of the subject's response.

  7. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown to be effective for reducing thermographic NDE data. While it is a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intensive and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.

  8. A general ocean color atmospheric correction scheme based on principal components analysis: Part I. Performance on Case 1 and Case 2 waters

    NASA Astrophysics Data System (ADS)

    Gross-Colzy, Lydwine; Colzy, Stéphane; Frouin, Robert; Henry, Patrice

    2007-09-01

    In order to retrieve ocean color from satellite imagery, one must perform atmospheric correction, because when observed from space the ocean signature is weak compared with the strong atmospheric signal. The color of the ocean depends on its optically active constituents: water molecules, dissolved matter, and particulate matter. In the open ocean, the color is mainly due to water molecules and phytoplankton, whereas in the coastal zone, the color also results from the presence of sediments and colored dissolved organic matter. Because coastal waters (Case 2 waters) are much more difficult to decouple from the atmosphere than open ocean (Case 1 waters), operational atmospheric correction algorithms usually separate Case 1 from Case 2 waters processing. The solution proposed in this paper does not separate them. Our algorithm, referred to as Ocean Color Estimation by principal component ANalysis (OCEAN), exploits the fact that ocean is more variable spectrally than the atmosphere, while the atmosphere signal is more variable in magnitude. The satellite reflectance is first decomposed into principal components. The components sensitive to the ocean signal are then combined to retrieve the principal components of the marine reflectance via neural network methodology. The algorithm is described, and results are presented on real and simulated data for POLDER, MERIS, SeaWiFS, and MODIS. Accurate water reflectance estimates are obtained for various aerosol types and contents (including maritime, coastal and urban mixtures), and for the full range of water properties (resulting from realistic combinations of chlorophyll content, sediment content, and colored dissolved matter absorption).

  9. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.

  10. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and in industry. At present, however, these analyses are used for turbomachinery design with uncertainties accounted for by safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe, reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in the primitive variables are obtained as a byproduct of the FPI technique. The combined aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Of the 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle

  11. Differential-Private Data Publishing Through Component Analysis

    PubMed Central

    Jiang, Xiaoqian; Ji, Zhanglong; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Ohno-Machado, Lucila

    2013-01-01

    A reasonable compromise of privacy and utility exists at an “appropriate” resolution of the data. We proposed novel mechanisms to achieve privacy preserving data publishing (PPDP) satisfying ε-differential privacy with improved utility through component analysis. The mechanisms studied in this article are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The differential PCA-based PPDP serves as a general-purpose data dissemination tool that guarantees better utility (i.e., smaller error) compared to Laplacian and Exponential mechanisms using the same “privacy budget”. Our second mechanism, the differential LDA-based PPDP, favors data dissemination for classification purposes. Both mechanisms were compared with state-of-the-art methods to show performance differences. PMID:24409205
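
    A simplified sketch of one common differentially private PCA construction, perturbing the scatter matrix with symmetric Laplace noise before eigendecomposition; the noise calibration shown is illustrative only, and the paper's exact mechanism may differ.

        import numpy as np

        def dp_pca_components(X, epsilon, k, seed=0):
            """Simplified DP-PCA sketch: perturb the scatter matrix with symmetric
            Laplace noise, then eigendecompose. Assumes the rows of X are
            L2-normalized; a real deployment needs a proper sensitivity analysis."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            C = X.T @ X / n
            noise = np.triu(rng.laplace(scale=2.0 * d / (n * epsilon), size=(d, d)))
            C_noisy = C + noise + np.triu(noise, 1).T      # symmetric noisy scatter
            vals, vecs = np.linalg.eigh(C_noisy)
            return vecs[:, ::-1][:, :k]                    # top-k private directions

        X = np.random.default_rng(1).normal(size=(500, 20))
        X /= np.linalg.norm(X, axis=1, keepdims=True)      # bound each record's norm
        V = dp_pca_components(X, epsilon=1.0, k=3)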

  12. Determination of lipophilicity of some non-steroidal anti-inflammatory agents and their relationships by using principal component analysis based on thin-layer chromatographic retention data.

    PubMed

    Sârbu, C; Todor, S

    1998-10-02

    The relative lipophilicity of ten non-steroidal anti-inflammatory agents has been determined by reversed-phase thin-layer chromatography using different reversed-phase high-performance thin-layer chromatography plates and water-methanol mixtures as eluents. The compounds studied showed regular retention behavior, their RM values decreasing linearly with increasing concentration of methanol in the eluent. Principal component analysis allowed a more rational and objective estimation and comparison of the lipophilicity determined by reversed-phase thin-layer chromatography. It also affords a useful graphical tool, since scatterplots of the scores onto the plane described by the first two components separate the compounds from each other most effectively, thus yielding a "congeneric lipophilicity chart".

  13. Evaluation of ground water monitoring network by principal component analysis.

    PubMed

    Gangopadhyay, S; Gupta, A; Nachabe, M H

    2001-01-01

    Principal component analysis is a data reduction technique used to identify the important components or factors that explain most of the variance of a system. Here, this technique was extended to evaluating a ground water monitoring network in which the variables are monitoring wells. The objective was to identify monitoring wells that are important in predicting the dynamic variation in potentiometric head at a location. The technique is demonstrated through an application to the monitoring network of the Bangkok area. Principal component analysis was carried out for all the monitoring wells of the aquifer, and a ranking scheme was developed based on the frequency with which a particular well occurs as a principal well. A decision maker with budget constraints can then opt to monitor the principal wells, which can adequately capture the potentiometric head variation in the aquifer. This was evaluated by comparing the observed potentiometric head distribution obtained using data from all available wells with that from wells selected using the ranking scheme as a guideline.

  14. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million.

    PubMed

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods.
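
    The key trick, representing every bootstrap sample by its n-dimensional coordinates, can be sketched as follows; the sizes are toy values and the code is a paraphrase of the idea, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(8)
        p, n = 100_000, 50
        X = rng.normal(size=(p, n))                 # p measurements per subject, n subjects
        X -= X.mean(axis=1, keepdims=True)

        # One thin SVD of the full data: X = U @ diag(d) @ Vt
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        A = np.diag(d) @ Vt                         # n-dimensional coordinates of all subjects

        boot_eigs = []
        for _ in range(200):
            idx = rng.integers(0, n, size=n)        # resample subjects with replacement
            Ab = A[:, idx] - A[:, idx].mean(axis=1, keepdims=True)
            Ub, db, _ = np.linalg.svd(Ab, full_matrices=False)   # only an n x n SVD
            boot_eigs.append(db[:3] ** 2 / (n - 1))              # leading eigenvalues
            # p-dimensional components, if ever needed, are U @ Ub (never stored here)

        print(np.std(boot_eigs, axis=0))            # bootstrap SEs of top eigenvalues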

  15. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801

  16. Recursive approach of EEG-segment-based principal component analysis substantially reduces cryogenic pump artifacts in simultaneous EEG-fMRI data.

    PubMed

    Kim, Hyun-Chul; Yoo, Seung-Schik; Lee, Jong-Hwan

    2015-01-01

    Electroencephalography (EEG) data simultaneously acquired with functional magnetic resonance imaging (fMRI) data are preprocessed to remove gradient artifacts (GAs) and ballistocardiographic artifacts (BCAs). Nonetheless, these data, especially in the gamma frequency range, can be contaminated by residual artifacts produced by mechanical vibrations in the MRI system, in particular the cryogenic pump that compresses and transports the helium that chills the magnet (the helium-pump). However, few options are available for the removal of helium-pump artifacts. In this study, we propose a recursive approach of EEG-segment-based principal component analysis (rsPCA) that enables the removal of these helium-pump artifacts. Using the rsPCA method, feature vectors representing helium-pump artifacts were successfully extracted as eigenvectors, and the reconstructed signals of the feature vectors were subsequently removed. A test using simultaneous EEG-fMRI data acquired from left-hand (LH) and right-hand (RH) clenching tasks performed by volunteers found that the proposed rsPCA method substantially reduced helium-pump artifacts in the EEG data and significantly enhanced task-related gamma band activity levels (p=0.0038 and 0.0363 for LH and RH tasks, respectively) in EEG data that have had GAs and BCAs removed. The spatial patterns of the fMRI data were estimated using a hemodynamic response function (HRF) modeled from the estimated gamma band activity in a general linear model (GLM) framework. Active voxel clusters were identified in the post-/pre-central gyri of motor area, only from the rsPCA method (uncorrected p<0.001 for both LH/RH tasks). In addition, the superior temporal pole areas were consistently observed (uncorrected p<0.001 for the LH task and uncorrected p<0.05 for the RH task) in the spatial patterns of the HRF model for gamma band activity when the task paradigm and movement were also included in the GLM. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach.

  18. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis is widely used as a means of producing elemental maps. In some cases, however, the XRF images obtained for trace elements are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were reduced, leading to clearer contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
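
    A minimal sketch of PCA filtering for a spectral image cube of this kind: per-pixel spectra are reconstructed from the leading components only, discarding the noise carried by the remaining ones. The cube dimensions and component count are placeholders, not values from the paper.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(9)
        cube = rng.poisson(5.0, size=(64, 64, 1024)).astype(float)  # x, y, XRF spectrum

        spectra = cube.reshape(-1, 1024)            # one spectrum per pixel
        pca = PCA(n_components=8)
        scores = pca.fit_transform(spectra)

        # Reconstruct from the leading components only: noise in the discarded
        # components is removed, sharpening elemental maps built from the result
        filtered = pca.inverse_transform(scores).reshape(cube.shape)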

  19. Gene set analysis using variance component tests.

    PubMed

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.

  20. Gene set analysis using variance component tests

    PubMed Central

    2013-01-01

    Background Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. Results We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset. PMID:23806107

  1. Incorporating principal component analysis into air quality ...

    EPA Pesticide Factsheets

    The efficacy of standard air quality model evaluation techniques is becoming compromised as simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variation (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric – the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO₄²⁻) and ammonium (NH₄⁺) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station–grid cell pairs) that are used as indicators for a more thorough diagnostic evaluation thereby hastening and facilitating understanding of the prob

  2. Evaluating Model Misspecification in Independent Component Analysis

    PubMed Central

    Lee, Seonjoo; Caffo, Brian S.; Lakshmanan, Balaji; Pham, Dzung L.

    2014-01-01

    Independent component analysis (ICA) is a popular blind source separation technique used in many scientific disciplines. Current ICA approaches have focused on developing efficient algorithms under specific ICA models, such as instantaneous or convolutive mixing conditions, intrinsically assuming temporal independence or autocorrelation of the sources. In practice, the true model is not known and different ICA algorithms can produce very different results. Although it is critical to choose an ICA model, there has not been enough research done on evaluating mixing models and assumptions, and how the associated algorithms may perform under different scenarios. In this paper, we investigate the performance of multiple ICA algorithms under various mixing conditions. We also propose a convolutive ICA algorithm for echoic mixing cases. Our simulation studies show that the performance of ICA algorithms is highly dependent on mixing conditions and temporal independence of the sources. Most instantaneous ICA algorithms fail to separate autocorrelated sources, while convolutive ICA algorithms depend highly on the model specification and approximation accuracy of unmixing filters. PMID:25642002

  3. Nonlinear independent component analysis and multivariate time series analysis

    NASA Astrophysics Data System (ADS)

    Storck, Jan; Deco, Gustavo

    1997-02-01

    We derive an information-theory-based unsupervised learning paradigm for nonlinear independent component analysis (NICA) with neural networks. We demonstrate that, under the constraint of bounded and invertible output transfer functions, the two main goals of unsupervised learning, redundancy reduction and maximization of the transmitted information between input and output (the Infomax principle), are equivalent. No assumptions are made concerning the kind of input and output distributions, i.e., the kind of nonlinearity of the correlations. An adapted version of the general NICA network is used for the modeling of multivariate time series by unsupervised learning. Given time series of various observables of a dynamical system, our net learns their evolution in time by extracting statistical dependencies between past and present elements of the time series. Multivariate modeling is obtained by making the present value of each time series statistically independent not only of its own past but also of the past of the other series. Therefore, in contrast to univariate methods, the information lying in the couplings between the observables is also used, and a detection of higher-order cross correlations is possible. We apply our method to time series of the two-dimensional Hénon map and to experimental time series obtained from measurements of axial velocities at different locations in weakly turbulent Taylor-Couette flow.

  4. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same: it depends upon what other components are in the scene and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a single, mineralogic spectral endmember. It is more likely that there are many spectral endmembers that vary identically to within the noise level, such that the PCA technique cannot distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 μm absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to the USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data gathered at the NASA Infrared Telescope Facility in collaboration with

  5. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  6. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear-accelerator-based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.

  7. Reducing electrocardiographic artifacts from electromyogram signals with independent component analysis.

    PubMed

    Costa Junior, J D; Ferreira, D D; Nadal, J; Miranda de Sa, A L

    2010-01-01

    The aim of this work was to reduce ECG artifacts in surface electromyogram (EMG) signals collected from lumbar muscles with a blind source separation technique based on independent component analysis (ICA). Using four EMG signals collected above the erector spinae lumbar muscles of 27 subjects, the proposed method failed to separate the sources. However, when a single EMG channel was considered together with the same channel time-shifted by one sample, FastICA allowed the ECG interference in the EMG signal to be reduced.
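
    The delay-embedding trick is simple to reproduce; a minimal sketch with scikit-learn's FastICA (the kurtosis-based choice of which component to discard is our assumption, not part of the abstract):

        import numpy as np
        from sklearn.decomposition import FastICA

        def reduce_ecg(emg, lag=1):
            # Pair the channel with a copy of itself shifted by 'lag' samples
            # and unmix the pair, as in the single-channel variant above.
            x = np.column_stack([emg[lag:], emg[:-lag]])
            ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
            s = ica.fit_transform(x)                       # (samples, 2) sources
            kurt = np.mean(s ** 4, axis=0) / np.var(s, axis=0) ** 2 - 3.0
            s[:, np.argmax(kurt)] = 0.0                    # heuristic: spikier = ECG
            return ica.inverse_transform(s)[:, 0]          # cleaned channel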

  8. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of operational modal analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
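
    For orientation, the plain (non-optimized) Spectral Kurtosis is straightforward to compute; a sketch assuming scipy (the paper's Optimized Spectral Kurtosis differs in the estimator details):

        import numpy as np
        from scipy.signal import stft

        def spectral_kurtosis(x, fs, nperseg=256):
            # SK(f) = E|X(t,f)|^4 / (E|X(t,f)|^2)^2 - 2, averaged over time.
            # A stationary Gaussian response gives SK ~ 0, while a pure harmonic
            # tends to -1, which separates harmonics from structural modes.
            f, _, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
            p2 = np.mean(np.abs(Z) ** 2, axis=-1)
            p4 = np.mean(np.abs(Z) ** 4, axis=-1)
            return f, p4 / p2 ** 2 - 2.0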

  9. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
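
    The one-step retrieval described above reduces to ordinary least squares once the principal components and the Jacobian are in hand; a hedged numpy sketch (the array layout and n_pc default are our assumptions):

        import numpy as np

        def pca_so2_column(R_train, y, jacobian, n_pc=8):
            # R_train: (n_spectra, n_wl) radiances from SO2-free regions;
            # y: (n_wl,) measured spectrum; jacobian: (n_wl,) SO2 Jacobian
            # from a radiative-transfer model.
            mean = R_train.mean(axis=0)
            _, _, Vt = np.linalg.svd(R_train - mean, full_matrices=False)
            A = np.column_stack([Vt[:n_pc].T, jacobian])   # PCs + SO2 signature
            coef, *_ = np.linalg.lstsq(A, y - mean, rcond=None)
            return coef[-1]    # SO2 vertical column = weight on the Jacobian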

  10. Features of spatiotemporal groundwater head variation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Hsiao, Chin-Tsai; Chang, Liang-Cheng; Tsai, Jui-Pin; Chen, You-Cheng

    2017-04-01

    The effect of external stimuli on a groundwater system can be understood by examining the features of spatiotemporal head variations. However, the head variations caused by various external stimuli are mixed signals. To identify the stimulus features of head variations, we propose a systematic approach based on independent component analysis (ICA), frequency analysis, cross-correlation analysis, a well-selection strategy, and hourly average head analysis. We also removed the head variations caused by regional stimuli (e.g., rainfall and river stage) from the original head variations of all the wells to better characterize the local stimulus features (e.g., pumping and tide). In the synthetic case study, the derived independent component (IC) features are more consistent with the features of the given recharge and pumping than the features derived from principal component analysis. In a real case study, the ICs associated with regional stimuli were highly correlated with field observations, and the effect of regional stimuli on the head variations of all the wells was quantified. In addition, the tidal, agricultural, industrial, and spring pumping features were characterized. Therefore, the developed method can facilitate understanding of the features of spatiotemporal head variation and quantification of the effects of external stimuli on a groundwater system.

  11. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in measuring the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It is determined that, while the standard version of the VCA algorithm can attain sensitivity similar to the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.
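
    A much-simplified sketch of the VCA idea, iteratively taking the most extreme pixel along directions orthogonal to the endmembers found so far (the published algorithm includes an SNR-dependent subspace projection omitted here):

        import numpy as np

        def vca_sketch(X, n_end, seed=0):
            # X: (n_pixels, n_bands). Repeatedly project the data on a random
            # direction orthogonal to the endmembers found so far and keep the
            # most extreme pixel (a simplex vertex).
            rng = np.random.default_rng(seed)
            n_bands = X.shape[1]
            E = np.zeros((n_end, n_bands))
            for k in range(n_end):
                d = rng.standard_normal(n_bands)
                if k > 0:
                    A = E[:k].T                           # current endmember subspace
                    d = d - A @ np.linalg.pinv(A) @ d     # orthogonal complement
                E[k] = X[np.argmax(np.abs(X @ d))]
            return E

        # Per-pixel abundances then follow from non-negative least squares,
        # e.g. scipy.optimize.nnls(E.T, pixel_spectrum).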

  12. Network component analysis: reconstruction of regulatory signals in biological systems.

    PubMed

    Liao, James C; Boscolo, Riccardo; Yang, Young-Lyeol; Tran, Linh My; Sabatti, Chiara; Roychowdhury, Vwani P

    2003-12-23

    High-dimensional data sets generated by high-throughput technologies, such as DNA microarrays, are often the outputs of complex networked systems driven by hidden regulatory signals. Traditional statistical methods for computing low-dimensional or hidden representations of these data sets, such as principal component analysis and independent component analysis, ignore the underlying network structures and provide decompositions based purely on a priori statistical constraints on the computed component signals. The resulting decomposition thus provides a phenomenological model for the observed data and does not necessarily contain physically or biologically meaningful signals. Here, we develop a method, called network component analysis, for uncovering hidden regulatory signals from outputs of networked systems when only partial knowledge of the underlying network topology is available. The a priori network structure information is first tested for compliance with a set of identifiability criteria. For networks that satisfy the criteria, the signals from the regulatory nodes and their strengths of influence on each output node can be faithfully reconstructed. This method is first validated experimentally using the absorbance spectra of a network of various hemoglobin species. The method is then applied to microarray data generated from the yeast Saccharomyces cerevisiae, and the activities of various transcription factors during the cell cycle are reconstructed using recently discovered connectivity information for the underlying transcriptional regulatory networks.
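
    The defining constraint of network component analysis, a mixing matrix with a fixed zero pattern, can be illustrated with a small alternating-least-squares sketch (ours; the authors' algorithm and identifiability checks are more involved):

        import numpy as np

        def nca_sketch(E, mask, n_iter=100, seed=0):
            # E: (n_genes, n_samples) expression matrix, modeled as E ~ A @ P.
            # mask: (n_genes, n_regulators) boolean connectivity (True = edge).
            # Unlike PCA/ICA, the zero pattern of A is held fixed throughout;
            # the identifiability conditions on 'mask' are assumed to hold.
            rng = np.random.default_rng(seed)
            A = mask * rng.standard_normal(mask.shape)
            for _ in range(n_iter):
                P, *_ = np.linalg.lstsq(A, E, rcond=None)        # regulator signals
                for i in range(E.shape[0]):                      # per-gene strengths
                    idx = np.flatnonzero(mask[i])
                    if idx.size:
                        A[i, idx], *_ = np.linalg.lstsq(P[idx].T, E[i], rcond=None)
            return A, P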

  13. Analysis of failed nuclear plant components

    SciTech Connect

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  14. CHEMICAL ANALYSIS METHODS FOR ATMOSPHERIC AEROSOL COMPONENTS

    EPA Science Inventory

    This chapter surveys the analytical techniques used to determine the concentrations of aerosol mass and its chemical components. The techniques surveyed include mass, major ions (sulfate, nitrate, ammonium), organic carbon, elemental carbon, and trace elements. As reported in...

  15. Derivative component analysis for mass spectral serum proteomic profiles

    PubMed Central

    2014-01-01

    Background As a promising way to transform medicine, mass spectrometry based proteomics technologies have seen great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods able to capture essential data behaviors to achieve clinical-level disease diagnosis. Moreover, the field faces a challenge from data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes the identified biomarker patterns to lose repeatability and prevents them from real clinical usage. Methods In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, derivative component analysis examines input proteomics data in a multi-resolution approach by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker via integrating it with support vector machines to tackle the reproducibility issue, besides comparing it with state-of-the-art peers. Results Our results show that high-dimensional proteomics data are actually linearly separable under the proposed derivative component analysis (DCA). As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also suggests an effective resolution to overcoming proteomics data's reproducibility problem and provides new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical-level diagnostic performances reproducible across different proteomic data, which is more robust and systematic than the existing biomarker discovery based
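
    The paper's exact DCA construction is not reproduced here, but the multi-resolution-derivative idea can be sketched as follows (the feature layout, scales and linear-SVM pairing are our assumptions):

        import numpy as np

        def derivative_features(X, steps=(1, 2, 4)):
            # X: (n_samples, n_mz) spectra. Finite differences at several step
            # sizes act as crude multi-resolution derivatives and de-noisers.
            feats = [X]
            for step in steps:
                d1 = X[:, step:] - X[:, :-step]          # first derivative, scale=step
                d2 = d1[:, step:] - d1[:, :-step]        # second derivative
                feats.extend([d1, d2])
            return np.hstack(feats)

        # Profile-biomarker classifier: derivative features + linear SVM, e.g.
        #   from sklearn.svm import LinearSVC
        #   LinearSVC().fit(derivative_features(X_train), y_train)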

  16. Craniofacial similarity analysis through sparse principal component analysis

    PubMed Central

    Zhao, Junli; Wu, Zhongke; Li, Jinhua; Deng, Qingqiong; Li, Xiaona; Zhou, Mingquan

    2017-01-01

    The computer-aided craniofacial reconstruction (CFR) technique has been widely used in the fields of criminal investigation, archaeology, anthropology and cosmetic surgery. The evaluation of craniofacial reconstruction results is important for improving the effect of craniofacial reconstruction. Here, we used the sparse principal component analysis (SPCA) method to evaluate the similarity between two sets of craniofacial data. Compared with principal component analysis (PCA), SPCA can effectively reduce the dimensionality and simultaneously produce sparse principal components with sparse loadings, thus making it easy to explain the results. The experimental results indicated that the evaluation results of PCA and SPCA are consistent to a large extent. To compare the inconsistent results, we performed a subjective test, which indicated that the result of SPCA is superior to that of PCA. Most importantly, SPCA can not only compare the similarity of two craniofacial datasets but also locate regions of high similarity, which is important for improving the craniofacial reconstruction effect. In addition, the areas or features that are important for craniofacial similarity measurements can be determined from a large amount of data. We conclude that the craniofacial contour is the most important factor in craniofacial similarity evaluation. This conclusion is consistent with the conclusions of psychological experiments on face recognition and our subjective test. The results may provide important guidance for three- or two-dimensional face similarity evaluation, analysis and face recognition. PMID:28640836

  17. Craniofacial similarity analysis through sparse principal component analysis.

    PubMed

    Zhao, Junli; Duan, Fuqing; Pan, Zhenkuan; Wu, Zhongke; Li, Jinhua; Deng, Qingqiong; Li, Xiaona; Zhou, Mingquan

    2017-01-01

    The computer-aided craniofacial reconstruction (CFR) technique has been widely used in the fields of criminal investigation, archaeology, anthropology and cosmetic surgery. The evaluation of craniofacial reconstruction results is important for improving the effect of craniofacial reconstruction. Here, we used the sparse principal component analysis (SPCA) method to evaluate the similarity between two sets of craniofacial data. Compared with principal component analysis (PCA), SPCA can effectively reduce the dimensionality and simultaneously produce sparse principal components with sparse loadings, thus making it easy to explain the results. The experimental results indicated that the evaluation results of PCA and SPCA are consistent to a large extent. To compare the inconsistent results, we performed a subjective test, which indicated that the result of SPCA is superior to that of PCA. Most importantly, SPCA can not only compare the similarity of two craniofacial datasets but also locate regions of high similarity, which is important for improving the craniofacial reconstruction effect. In addition, the areas or features that are important for craniofacial similarity measurements can be determined from a large amount of data. We conclude that the craniofacial contour is the most important factor in craniofacial similarity evaluation. This conclusion is consistent with the conclusions of psychological experiments on face recognition and our subjective test. The results may provide important guidance for three- or two-dimensional face similarity evaluation, analysis and face recognition.
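
    scikit-learn's SparsePCA makes the comparison described in these two records easy to prototype; a sketch under the assumption that each face is a flattened landmark-coordinate vector (the similarity score shown is our illustration, not the paper's measure):

        import numpy as np
        from sklearn.decomposition import SparsePCA

        def spca_similarity(A, B, n_comp=10, alpha=1.0):
            # A, B: (n_faces, 3 * n_landmarks) flattened craniofacial coordinates.
            # Sparse loadings confine each component to a region of the face, so
            # per-component score differences localize where the sets disagree.
            model = SparsePCA(n_components=n_comp, alpha=alpha, random_state=0)
            model.fit(np.vstack([A, B]))
            sa = model.transform(A).mean(axis=0)
            sb = model.transform(B).mean(axis=0)
            return np.abs(sa - sb)        # small values = similar in that region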

  18. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    NASA Astrophysics Data System (ADS)

    de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru

    2005-10-01

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  1. Application of independent component analysis for beam diagnosis

    SciTech Connect

    Huang, X.; Lee, S.Y.; Prebys, Eric; Tomlin, Ray; /Fermilab

    2005-05-01

    The independent component analysis (ICA) is applied to analyze simultaneous multiple turn-by-turn beam position monitor (BPM) data of synchrotrons. The sampled data are decomposed into physically independent source signals, such as betatron motion, synchrotron motion and other perturbation sources. The decomposition is based on the simultaneous diagonalization of several unequal-time covariance matrices, unlike the model independent analysis (MIA), which uses only the equal-time covariance matrix. Consequently, the new method has an advantage over MIA in isolating the independent modes and is more robust under the influence of contaminating signals from bad BPMs. The spatial pattern and temporal pattern of each resulting component (mode) can be used to identify and analyze the associated physical cause. Beam optics can be studied on the basis of the betatron modes. The method has been successfully applied to the Booster Synchrotron at Fermilab.
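
    A single-lag relative (AMUSE) of the multi-lag diagonalization described above can be written in a few lines; a sketch for turn-by-turn BPM matrices (the paper diagonalizes several lagged covariances jointly, which is more robust):

        import numpy as np

        def amuse(X, lag=1):
            # X: (n_bpms, n_turns) turn-by-turn readings. Whiten with the
            # equal-time covariance, then diagonalize one symmetrized
            # time-lagged covariance matrix.
            X = X - X.mean(axis=1, keepdims=True)
            C0 = X @ X.T / X.shape[1]
            d, E = np.linalg.eigh(C0)
            W = E @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ E.T
            Z = W @ X                                      # whitened data
            Ct = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
            _, U = np.linalg.eigh((Ct + Ct.T) / 2.0)
            return U.T @ Z, U.T @ W      # modes (e.g. betatron), unmixing matrix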

  2. Microcalorimeter pulse analysis by means of principal component decomposition

    NASA Astrophysics Data System (ADS)

    de Vries, C. P.; Schouten, R. M.; van der Kuur, J.; Gottardi, L.; Akamatsu, H.

    2016-07-01

    The X-ray integral field unit for the Athena mission consists of a microcalorimeter transition edge sensor pixel array. Incoming photons generate pulses which are analyzed in terms of energy, in order to assemble the X-ray spectrum. Usually this is done by means of optimal filtering in either the time or frequency domain. In this paper we investigate an alternative method based on principal component analysis. This method attempts to find the main components of an orthogonal set of functions to describe the data. We show, based on simulations, what the influence of various instrumental effects is on this type of analysis. We compare analyses in both the time and frequency domains. Finally we apply these analyses to real data, obtained via frequency domain multiplexing readout.
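
    A minimal numpy sketch of the decomposition step (the energy calibration against known lines, and the instrumental effects studied in the paper, are not shown):

        import numpy as np

        def pulse_components(pulses, n_comp=3):
            # pulses: (n_pulses, n_samples) baseline-subtracted pulse records.
            mean = pulses.mean(axis=0)
            _, _, Vt = np.linalg.svd(pulses - mean, full_matrices=False)
            comps = Vt[:n_comp]                  # dominant orthogonal pulse shapes
            scores = (pulses - mean) @ comps.T   # per-pulse coordinates; the
            return comps, scores                 # leading score tracks energy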

  3. Independent Component Analysis of Nanomechanical Responses of Cantilever Arrays

    SciTech Connect

    Archibald, Richard K; Datskos, Panos G; Noid, Don W; Lavrik, Nickolay V

    2007-01-01

    The ability to detect and identify chemical and biological elements in air or liquid environments is of far-reaching importance. Performing this task using technology that minimally impacts the perceived environment is the ultimate goal. The development of functionalized cantilever arrays with nanomechanical sensing is an important step towards this ambition. This report couples the feature extraction abilities of Independent Component Analysis (ICA) with the classification techniques of neural networks to analyze the signals produced by microcantilever-array-based nanomechanical sensors. The unique capabilities of this analysis unleash the potential of this sensing technology to accurately determine the identities and concentrations of the components of chemical mixtures. Furthermore, it is demonstrated that knowledge of how the sensor array reacts to individual analytes in isolation is sufficient to decode mixtures of analytes, a substantial benefit that significantly increases the analytical utility of these sensing devices.

  4. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters of yield estimation for the wine and table grape industry. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R^2 between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
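
    With OpenCV, the Canny-plus-Hough pipeline can be prototyped directly; a sketch with placeholder radii and thresholds (the paper's tuned contour extraction and its logarithmic image-processing variant are not reproduced):

        import cv2

        def count_berries(path, min_r=15, max_r=60):
            # Blur, then circular Hough transform (HOUGH_GRADIENT runs Canny
            # internally, with param1 as its upper edge threshold).
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            gray = cv2.GaussianBlur(gray, (9, 9), 2)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                                       minDist=min_r, param1=100, param2=40,
                                       minRadius=min_r, maxRadius=max_r)
            return 0 if circles is None else circles.shape[1]   # berry count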

  5. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases.

    PubMed

    Sengur, Abdulkadir

    2008-03-01

    In the last two decades, the use of artificial intelligence methods in medical analysis has been increasing. This is mainly because the effectiveness of classification and detection systems has improved a great deal to help medical experts in diagnosis. In this work, we investigate the use of principal component analysis (PCA), an artificial immune system (AIS) and fuzzy k-NN to distinguish normal from abnormal heart valves using Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage; filtering, normalization and white de-noising are the processes used in this stage. The second stage is feature extraction, during which wavelet packet decomposition was used and wavelet entropy was taken as the feature. To reduce the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study was carried out using a data set containing 215 samples. The validity of the proposed method is measured by the sensitivity and specificity parameters; a 95.9% sensitivity and 96% specificity rate was obtained.

  6. Improved dependent component analysis for hyperspectral unmixing with spatial correlations

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Wan, Jianwei; Huang, Bingchao; Lan, Tian

    2014-11-01

    In highly mixed hyperspectral datasets, dependent component analysis (DECA) has shown its superiority over traditional geometry-based algorithms. This paper proposes a new algorithm that incorporates DECA with the infinite hidden Markov random field (iHMRF) model, which can efficiently exploit spatial dependencies between image pixels and automatically determine the number of classes. An expectation-maximization algorithm is derived to infer the model parameters, including the endmembers, the abundances, the Dirichlet distribution parameters of each class, and the classification map. Experimental results based on synthetic and real hyperspectral data show the effectiveness of the proposed algorithm.

  7. Analysis of complications after blood components' transfusions.

    PubMed

    Timler, Dariusz; Klepaczka, Jadwiga; Kasielska-Trojan, Anna; Bogusiak, Katarzyna

    2015-04-01

    Complications after transfusion of blood components still constitute an important clinical problem and serve as a limitation of the liberal transfusion strategy. The aim of the study was to present the 5-year incidence of early blood transfusion complications and to assess their relation to the type of the transfused blood components. 58,505 transfusions of blood components performed in the years 2006-2010 were retrospectively analyzed. Data concerning the amount of the transfused blood components and the number of adverse transfusion reactions reported to the Regional Blood Donation and Treatment Center (RBDTC) were collected. 95 adverse transfusion reactions were reported to the RBDTC, 0.16% of all donations (95/58,505): 58 after PRBC transfusions, 28 after platelet concentrate transfusions and 9 after FFP transfusions. Febrile nonhemolytic and allergic reactions constituted 36.8% and 30.5% of all complications, respectively. Nonhemolytic and allergic reactions are the most common complications of blood component transfusion, and they are more common after platelet concentrate transfusions than after PRBC and FFP transfusions.

  8. Agile Objects: Component-Based Inherent Survivability

    DTIC Science & Technology

    2003-12-01

    We design, implement, and develop a component middleware system which enables online application reconfiguration to enhance application...Objects project has accomplished a basic proof of concept of the key project ideas showing working systems that embody location independence and online ...migration, open real-time structures and pre-allocation of resources to enable rapid migration, online interface mutation for elusive interfaces, and

  9. Principal component analysis on chemical abundances spaces

    NASA Astrophysics Data System (ADS)

    Ting, Yuan-Sen; Freeman, Kenneth C.; Kobayashi, Chiaki; De Silva, Gayandhi M.; Bland-Hawthorn, Joss

    2012-04-01

    In preparation for the High Efficiency and Resolution Multi-Element Spectrograph (HERMES) chemical tagging survey of about a million Galactic FGK stars, we estimate the number of independent dimensions of the space defined by the stellar chemical element abundances [X/Fe]. This leads to a way to study the origin of elements from observed chemical abundances using principal component analysis. We explore abundances in several environments, including solar neighbourhood thin/thick disc stars, halo metal-poor stars, globular clusters, open clusters, the Large Magellanic Cloud and the Fornax dwarf spheroidal galaxy. By studying solar-neighbourhood stars, we confirm the universality of the r-process, which tends to produce [neutron-capture elements/Fe] in a constant ratio. We find that, especially at low metallicity, the production of r-process elements is likely to be associated with the production of α-elements. This may support core-collapse supernovae as the r-process site. We also verify the overabundance of light s-process elements at low metallicity, and find that their relative contribution decreases at higher metallicity, which suggests that this lighter-element primary process may be associated with massive stars. We also verify the contribution from the s-process in low-mass asymptotic giant branch (AGB) stars at high metallicity. Our analysis reveals two types of core-collapse supernovae: one produces mainly α-elements, the other produces both α-elements and Fe-peak elements with a large enhancement of heavy Fe-peak elements, which may be the contribution from hypernovae. Excluding light elements that may be subject to internal mixing, as well as K and Cu, we find that the [X/Fe] chemical abundance space in the solar neighbourhood has about six independent dimensions both at low metallicity (-3.5 ≲ [Fe/H] ≲ -2) and high metallicity ([Fe/H] ≳ -1). However the dimensions come from very different origins in these two cases. The extra contribution from low-mass AGB
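
    Counting the independent dimensions of an abundance space reduces to examining the PCA variance spectrum; a sketch with an arbitrary variance cutoff (the paper's dimensionality criterion is more careful):

        import numpy as np

        def abundance_dimensions(xfe, var_cut=0.95):
            # xfe: (n_stars, n_elements) [X/Fe] ratios. Count the principal
            # components needed to reach var_cut of the total variance.
            s = np.linalg.svd(xfe - xfe.mean(axis=0), compute_uv=False)
            frac = np.cumsum(s ** 2) / np.sum(s ** 2)
            return int(np.searchsorted(frac, var_cut) + 1)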

  10. Mapping ash properties using principal components analysis

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects of soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements that are released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it is deposited matters: until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorus and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2

  11. Autonomous radar pulse modulation classification using modulation components analysis

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Qiu, Zhaoyang; Zhu, Jun; Tang, Bin

    2016-12-01

    An autonomous method for recognizing radar pulse modulations based on modulation components analysis is introduced in this paper. Unlike conventional automatic modulation classification methods, which extract modulation features based on a list of known patterns, the proposed method classifies modulations by the existence of basic modulation components, including continuous frequency modulations, discrete frequency codes and discrete phase codes, in an autonomous way. A feasible way to realize this method is to use the features of abrupt changes in the instantaneous frequency rate curve, which is derived from the short-term general representation of the phase derivative. This method is suitable not only for basic radar modulations but also for complicated and hybrid modulations. A theoretical result and two experiments demonstrate the effectiveness of the proposed method.

  12. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion, with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open source, object oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS' Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  13. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
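
    Principal component pursuit, the optimization behind RPCA, has a compact inexact-ALM implementation; a sketch (the parameter defaults follow common practice, not necessarily the paper's choices):

        import numpy as np

        def rpca(M, lam=None, mu=None, n_iter=100):
            # Split M into low-rank L (clutter/ground response) plus sparse S
            # (anomalies) by solving min ||L||_* + lam * ||S||_1, L + S = M.
            m, n = M.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / np.abs(M).sum()
            Y = np.zeros_like(M)      # Lagrange multipliers
            S = np.zeros_like(M)
            for _ in range(n_iter):
                # Low-rank update: singular-value thresholding.
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # Sparse update: elementwise soft-thresholding.
                R = M - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
                Y += mu * (M - L - S)                    # dual ascent step
            return L, S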

  14. Distributed Principal Component Analysis for Wireless Sensor Networks

    PubMed Central

    Le Borgne, Yann-Aël; Raybaud, Sylvain; Bontempi, Gianluca

    2008-01-01

    The Principal Component Analysis (PCA) is a data dimensionality reduction technique well-suited for processing data from sensor networks. It can be applied to tasks like compression, event detection, and event recognition. This technique is based on a linear transform where the sensor measurements are projected on a set of principal components. When sensor measurements are correlated, a small set of principal components can explain most of the measurement variability. This allows a significant decrease in the amount of radio communication and of energy consumption. In this paper, we show that the power iteration method can be distributed in a sensor network in order to compute an approximation of the principal components. The proposed implementation relies on an aggregation service, which has recently been shown to provide a suitable framework for distributing the computation of a linear transform within a sensor network. We also extend this previous work by providing a detailed analysis of the computational, memory, and communication costs involved. A compression experiment involving real data validates the algorithm and illustrates the tradeoffs between accuracy and communication costs. PMID:27873788
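
    The distributed power iteration can be mimicked in a few lines by making the network-wide sum explicit; a sketch in which the aggregation service is simulated by an ordinary sum (in a real deployment the normalization would itself be one more aggregate):

        import numpy as np

        def distributed_first_pc(node_data, n_rounds=50):
            # node_data: list of length-T measurement vectors, one per node.
            # Node i holds one coordinate w[i]; the only network-wide step is
            # the aggregate sum s = sum_i w[i] * x_i.
            w = np.ones(len(node_data))
            for _ in range(n_rounds):
                s = sum(wi * xi for wi, xi in zip(w, node_data))   # aggregation
                w = np.array([xi @ s for xi in node_data])         # local updates
                w /= np.linalg.norm(w)
            return w   # first principal component across the node signals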

  15. Effect of the components' interface on the synthesis of methanol over Cu/ZnO from CO2/H2: a microkinetic analysis based on DFT + U calculations.

    PubMed

    Tang, Qian-Lin; Zou, Wen-Tian; Huang, Run-Kun; Wang, Qi; Duan, Xiao-Xuan

    2015-03-21

    The elucidation of chemical reactions occurring on composite systems (e.g., copper (Cu)/zincite (ZnO)) from first principles is a challenging task because of their very large sizes and complicated equilibrium geometries. By combining the density functional theory plus U (DFT + U) method with microkinetic modeling, the present study has investigated the role of the phase boundary in CO2 hydrogenation to methanol over Cu/ZnO. The turnover frequency calculated under realistic conditions revealed the absence of hydrogenation sites created by the interface between the two catalyst components, and stressed the importance of interfacial copper in providing spillover hydrogen for remote Cu(111) sites. Coupled with the fact that methanol production on the binary catalyst was recently believed to predominantly involve the bulk metallic surface, the spillover of interface hydrogen atoms onto Cu(111) facets facilitates the production process. The cooperative influence of the two different kinds of copper sites can be rationalized by applying the Brønsted-Evans-Polanyi (BEP) relationship and allows us to find that the catalytic activity of ZnO-supported Cu catalysts follows a volcano-type dependence with decreasing particle size. Our results may have useful implications for the future design of new Cu/ZnO-based materials for CO2 transformation to methanol.

  16. Technique analysis in elite athletes using principal component analysis.

    PubMed

    Gløersen, Øyvind; Myklebust, Håvard; Hallén, Jostein; Federolf, Peter

    2017-03-13

    The aim of this study was to advance current movement analysis methodology to enable technique analysis in sports, facilitating (1) concurrent comparison of techniques between several athletes; (2) identification of potentially beneficial technique modifications and (3) a visual representation of the findings for feedback to the athletes. Six elite cross-country skiers, three World Cup winners and three national elite, roller-ski skated using the V2 technique on a treadmill while their movement patterns were recorded using 41 reflective markers. A principal component analysis performed on the marker positions resulted in multi-segmental "principal" movement components (PMs). A novel normalisation facilitated comparability of the PMs between athletes. Additionally, centre of mass (COM) trajectories were modelled. We found correlations between the athletes' performance levels (judged from race points) and specific features in the PMs and in the COM trajectories. Plausible links between COM trajectories and PMs were observed, suggesting that better performing skiers exhibited a different, possibly more efficient use of their body mass for propulsion. The analysis presented in the current study revealed specific technique features that appeared to relate to the skiers' performance levels. How changing these features would affect an individual athlete's technique was visualised with animated stick figures.

  17. Component Modal Analysis of a Folding Wing

    NASA Astrophysics Data System (ADS)

    Wang, Ivan

    This thesis explores the aeroelastic stability of a folding wing with an arbitrary number of wing segments. Simplifying assumptions are made such that it is possible to derive the equations of motion analytically. First, a general structural dynamics model based on beam theory is derived from a modal analysis using Lagrange's equations, and is used to predict the natural frequencies of different folding wing configurations. Next, the structural model is extended to an aeroelastic model by incorporating the effects of unsteady aerodynamic forces. The aeroelastic model is used to predict the flutter speed and flutter frequencies of folding wings. Experiments were conducted for three folding wing configurations---a two-segment wing, a three-segment wing, and a four-segment wing---and the outboard fold angle was varied over a wide range for each configuration. Very good agreement in both magnitude and overall trend was obtained between the theoretical and experimental structural natural frequencies, as well as the flutter frequency. For the flutter speed, very good agreement was obtained for the two-segment model, but the agreement worsens as the number of wing segments increases. Possible sources of error and attempts to improve correlation are described. Overall, the aeroelastic model predicts the general trends to good accuracy, offers some additional physical insight, and can be used to efficiently compute flutter boundaries and frequency characteristics for preliminary design or sensitivity studies.

  18. Analysis of truss, beam, frame, and membrane components. [composite structures

    NASA Technical Reports Server (NTRS)

    Knoell, A. C.; Robinson, E. Y.

    1975-01-01

    Truss components are considered, taking into account composite truss structures, truss analysis, column members, and truss joints. Beam components are discussed, giving attention to composite beams, laminated beams, and sandwich beams. Composite frame components and composite membrane components are examined. A description is given of examples of flat membrane components and examples of curved membrane elements. It is pointed out that composite structural design and analysis is a highly interactive, iterative procedure which does not lend itself readily to characterization by design or analysis function only.

  19. Removing noise from MT data by using independent component analysis

    NASA Astrophysics Data System (ADS)

    Okuda, M.; Mogi, T.

    2016-12-01

    We carried out an MT survey in the Boso peninsula (Chiba, Central Japan) to investigate the resistivity structure of an area where slow slip events have occurred at least five times within 20 years. Large artificial noise contaminated the MT data, and the resistivity and phase showed near-field effects in the frequency band below 1 Hz. To avoid the local noise, we attempted to apply independent component analysis (ICA). ICA is a multivariate analysis method in which complicated data sets can be separated into their underlying sources without knowing these sources or the way they are mixed. It assumes that the mixing is linear, and yields the relation x(t) = As(t), with input signals x(t), mixing matrix A and source signals s(t). ICA has the ability to compute the matrix W (= A^-1). In this study, we used ICA programs for complex signals to deal with the phase part of the frequency-domain data. This is an extension of the FastICA algorithm introduced by Hyvärinen (2000), based on a fixed-point iteration scheme, to complex-valued signals. We applied the ICA method to improve the horizontal magnetic components of the MT data, using both the data observed in the Boso area and noise-free magnetic data observed at the Memanbetsu Branch of the Geomagnetic Observatory. The observatory is located in eastern Hokkaido, Northern Japan, approximately 800 km from the Boso area. After applying ICA, each component has no defined intensity scale. To extract noise-free data on the original scale, we kept the noise-free component, set the other (noise) components to 0, and applied the inverse of W to recover the original x, i.e., x(t) = W^-1 u'(t), where u'(t) is the component vector after ICA and x(t) the original data vector. Secondly, we tried to improve the electric components in accordance with the noise-free magnetic components. Finally, we calculated the apparent resistivities and phases using the data processed as above. In comparison between before and after
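
    The keep-some-components reconstruction x(t) = W^-1 u'(t) is easy to express with scikit-learn's (real-valued) FastICA; a sketch, noting that the study used a complex-valued variant and that the noise components here are assumed to be identified by inspection:

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_denoise(X, noise_idx):
            # X: (n_channels, n_samples) field components; noise_idx: indices
            # of components judged to be artificial noise.
            ica = FastICA(n_components=X.shape[0], whiten="unit-variance",
                          random_state=0)
            U = ica.fit_transform(X.T)         # u'(t), one column per component
            U[:, noise_idx] = 0.0              # discard the noise components
            return ica.inverse_transform(U).T  # x_clean(t) = W^-1 u'(t)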

  20. Bioactivity of beaver castoreum constituents using principal components analysis.

    PubMed

    Schulte, B A; Müller-Schwarze, D; Tang, R; Webster, F X

    1995-07-01

    North American beaver (Castor canadensis) were observed to sniff from the water and make land visits to some synthetic chemical components of castoreum placed on experimental scent mounds (ESM). In previous analysis, the elicitation (presence/absence), completeness, and/or strength (number, duration) of these key responses served as separate measures of biological activity. In this paper, we used principal components analysis (PCA) to combine linearly six related measures of observed response and one index of overnight visitation calculated over all trials. The first principal component accounted for a majority of the variation and allowed ranking of the samples based on their composite bioactivity. A second PCA, based only on response trials (excluding trials with no responses), showed that responses to the synthetic samples, once elicited, did not vary greatly in completeness or strength. None of the samples evoked responses as complete or strong as the castoreum control. Castoreum also elicited more multiple land visits (repeated visits to the ESM by the same individual or by more than one family member) than the synthetic samples, indicating that an understanding of the castoreum chemosignal requires consideration of responses by the family unit, and not just the land visit by the initial responder.

  1. Appliance of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for the evaluation of intrusion analysis methods. The simulator consists of a network model of an information system, the service model and the vulnerability model of each server, and the action models performed by the client and the intruder. We applied ICA to analyze the audit trail of the simulated information system. We report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly, and related an attack to the abnormalities that a normal application produced under the influence of the attack.

  2. Quantitative analysis of planetary reflectance spectra with principal components analysis

    NASA Technical Reports Server (NTRS)

    Johnson, P. E.; Smith, M. O.; Adams, J. B.

    1985-01-01

    A technique is presented for quantitative analysis of planetary reflectance spectra as mixtures of particles on microscopic and macroscopic scales using principal components analysis. This technique allows for determination of the endmembers being mixed, their abundance, and the scale of mixing, as well as other physical parameters. Eighteen lunar telescopic reflectance spectra of the Copernicus crater region, from 600 nm to 1800 nm in wavelength, are modeled in terms of five likely endmembers: mare basalt, mature mare soil, anorthosite, mature highland soil, and clinopyroxene. These endmembers were chosen from a similar analysis of 92 lunar soil and rock samples. The models fit the data to within 2 percent rms. It is found that the goodness of fit is marginally better for intimate mixing over macroscopic mixing.

  3. Quantitative analysis of planetary reflectance spectra with principal components analysis

    NASA Astrophysics Data System (ADS)

    Johnson, P. E.; Smith, M. O.; Adams, J. B.

    1985-02-01

    A technique is presented for quantitative analysis of planetary reflectance spectra as mixtures of particles on microscopic and macroscopic scales using principal components analysis. This technique allows for determination of the endmembers being mixed, their abundance, and the scale of mixing, as well as other physical parameters. Eighteen lunar telescopic reflectance spectra of the Copernicus crater region, from 600 nm to 1800 nm in wavelength, are modeled in terms of five likely endmembers: mare basalt, mature mare soil, anorthosite, mature highland soil, and clinopyroxene. These endmembers were chosen from a similar analysis of 92 lunar soil and rock samples. The models fit the data to within 2 percent rms. It is found that the goodness of fit is marginally better for intimate mixing over macroscopic mixing.
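
    The macroscopic (checkerboard) mixing model used in these two records is a non-negative least-squares fit; a sketch with a soft sum-to-one row (intimate mixing would first require converting reflectance to single-scattering albedo, which is not shown):

        import numpy as np
        from scipy.optimize import nnls

        def unmix(spectrum, endmembers, w=10.0):
            # spectrum: (n_bands,); endmembers: (n_end, n_bands).
            # Solve spectrum ~ a @ endmembers with a >= 0; the appended row
            # softly enforces sum(a) = 1 with weight w.
            A = np.vstack([endmembers.T, w * np.ones((1, endmembers.shape[0]))])
            b = np.append(spectrum, w)
            a, resid = nnls(A, b)
            return a, resid       # abundances and residual norm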

  4. Two-component signal transduction in Agaricus bisporus: a comparative genomic analysis with other basidiomycetes through the web-based tool BASID2CS.

    PubMed

    Lavín, José L; García-Yoldi, Alberto; Ramírez, Lucía; Pisabarro, Antonio G; Oguiza, José A

    2013-06-01

    Two-component systems (TCSs) are signal transduction mechanisms present in many eukaryotes, including fungi, in which they play essential roles in the regulation of several cellular functions and responses. In this study, we carry out a genomic analysis of the TCS proteins in two varieties of the white button mushroom Agaricus bisporus. The genomes of both A. bisporus varieties contain eight genes coding for TCS proteins, which include four hybrid Histidine Kinases (HKs), a single histidine-containing phosphotransfer (HPt) protein and three Response Regulators (RRs). Comparison of the TCS proteins among A. bisporus and the sequenced basidiomycetes showed a conserved core complement of five TCS proteins including the Tco1/Nik1 hybrid HK, the HPt protein and the Ssk1, Skn7 and Rim15-like RRs. In addition, Dual-HKs, unusual hybrid HKs with two HK and two RR domains, are absent in A. bisporus and are limited to various species of basidiomycetes. Differential expression analysis showed no significant up- or down-regulation of the Agaricus TCS genes in the conditions/tissues analyzed, with the exception of the Skn7-like RR gene (Agabi_varbisH97_2|198669), which is significantly up-regulated on compost compared with cultured mycelia. Furthermore, the pipeline web server BASID2CS (http://bioinformatics.unavarra.es:1000/B2CS/BASID2CS.htm) has been specifically designed for the identification, classification and functional annotation of putative TCS proteins from any predicted proteome of basidiomycetes using a combination of several bioinformatic approaches. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Principal-component-based population structure adjustment in the North American Rheumatoid Arthritis Consortium data: impact of single-nucleotide polymorphism set and analysis method

    PubMed Central

    2009-01-01

    Population structure occurs when a sample is composed of individuals with different ancestries and can result in excess type I error in genome-wide association studies. Genome-wide principal-component analysis (PCA) has become a popular method for identifying and adjusting for subtle population structure in association studies. Using the Genetic Analysis Workshop 16 (GAW16) NARAC data, we explore two unresolved issues concerning the use of genome-wide PCA to account for population structure in genetic association studies: the choice of single-nucleotide polymorphism (SNP) subset and the choice of adjustment model. We computed PCs for subsets of genome-wide SNPs with varying levels of linkage disequilibrium (LD). The first two PCs were similar for all subsets and the first three PCs were associated with case status for all subsets. When the PCs associated with case status were included as covariates in an association model, the reduction in the genomic inflation factor was similar for all SNP sets. Several models have been proposed to account for structure using PCs, but it is not yet clear whether the different methods will result in substantively different results for association studies with individuals of European descent. We compared genome-wide association p-values and results for two positive-control SNPs previously associated with rheumatoid arthritis using four PC adjustment methods as well as no adjustment and genomic control. We found that in this sample, adjusting for the continuous PCs or adjusting for discrete clusters identified using the PCs adequately accounts for the case-control population structure, but that a recently proposed randomization test performs poorly. PMID:20017972

  6. Using surface electromyography (SEMG) to classify low back pain based on lifting capacity evaluation with principal component analysis neural network method.

    PubMed

    Hung, Chia-Chun; Shen, Tsu-Wang; Liang, Chung-Chao; Wu, Wen-Tien

    2014-01-01

    Low back pain (LBP) is a leading cause of disability, and the population with low back pain has grown continuously in recent years. This study tries to distinguish LBP patients from healthy subjects by using objective surface electromyography (SEMG) as a quantitative score for clinical evaluations. 26 healthy subjects and 26 low back pain patients were involved in this research. They lifted different weights in static and dynamic lifting processes. Multiple features were extracted from the raw SEMG data, including energy and frequency indexes. Moreover, false discovery rate (FDR) control was used to omit false-positive features. Then, a principal component analysis neural network (PCANN) was used for classification. The results showed that features at different loadings (including 30% and 50% loading) during lifting can be used to distinguish healthy and back pain subjects. Using the PCANN method, accuracies of more than 80% were achieved when different lifting weights were applied. Moreover, some EMG features correlated with clinical scales of exertion, fatigue, and pain. This technology can potentially be used in future research as a computer-aided diagnosis tool for LBP evaluation.

  7. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children.

    PubMed

    Wassenburg, Stephanie I; de Koning, Björn B; de Vries, Meinou H; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension.

  8. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children

    PubMed Central

    Wassenburg, Stephanie I.; de Koning, Björn B.; de Vries, Meinou H.; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension. PMID:27378989

  9. Robust principal component analysis in water quality index development

    NASA Astrophysics Data System (ADS)

    Ali, Zalina Mohd; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan

    2014-06-01

    Several statistical procedures already available in the literature have been employed in developing the water quality index, WQI. The complexity and interdependency that occur in the physical and chemical processes of water can be explained more easily if statistical approaches are applied to water quality indexing. The most popular statistical method used in developing the WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized, with outliers either retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to the existing DOE-WQI.
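
    One common way to robustify PCA, sketched below, replaces the classical mean and sample covariance with scikit-learn's minimum covariance determinant (MCD) estimates; this illustrates the principle and is not necessarily the authors' RPCA-WQI procedure:

      import numpy as np
      from sklearn.covariance import MinCovDet

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 6))          # toy water-quality variables
      X[:5] += 10.0                          # a few gross outliers

      mcd = MinCovDet(random_state=0).fit(X)           # robust location/scatter
      eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
      order = np.argsort(eigvals)[::-1]
      loadings = eigvecs[:, order]                     # robust PC loadings
      scores = (X - mcd.location_) @ loadings          # robust PC scores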

  10. Independent component analysis for artefact separation in astrophysical images.

    PubMed

    Funaro, Maria; Oja, Erkki; Valpola, Harri

    2003-01-01

    In this paper, we demonstrate that independent component analysis, a novel signal processing technique, is a powerful method for separating artefacts from astrophysical image data. When studying far-off galaxies from a series of consecutive telescope images, there are several sources of artefacts that influence all the images, such as camera noise, atmospheric fluctuations and disturbances, cosmic rays, and stars in our own galaxy. In the analysis of astrophysical image data it is very important to implement techniques that can detect these artefacts with great accuracy, to prevent real physical events from being eliminated from the data along with the artefacts. For this problem, the linear ICA model holds very accurately because such artefacts are all theoretically independent of each other and of the physical events. Using image data on the M31 Galaxy, it is shown that several artefacts can be detected and recognized based on their temporal pixel luminosity profiles and independent component images. The obtained separation is good and the method is very fast. It is also shown that ICA outperforms principal component analysis in this task. For these reasons, ICA might provide a very useful pre-processing technique for the large amounts of available telescope image data.
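
    A minimal sketch of this separation idea, assuming scikit-learn's FastICA and a synthetic stack of frames; pixels are treated as samples so that the recovered sources are component images and the mixing matrix holds their temporal profiles:

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(4)
      frames = rng.random((20, 64, 64))      # 20 consecutive toy frames
      X = frames.reshape(20, -1).T           # pixels as samples, frames as features

      ica = FastICA(n_components=5, random_state=0, max_iter=1000)
      maps = ica.fit_transform(X)            # (n_pixels, 5): component images
      profiles = ica.mixing_                 # (20, 5): temporal luminosity profiles
      component_images = maps.T.reshape(5, 64, 64)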

  11. Experimental system and component performance analysis

    SciTech Connect

    Peterman, K.

    1984-10-01

    A prototype dye laser flow loop was constructed to flow-test large power amplifiers in Building 169. The flow loop is designed to operate at supply pressures up to 900 psig and flow rates up to 250 GPM. During the initial startup of the flow loop, experimental measurements were made to evaluate component and system performance. Three candidate dye flow loop pumps and three different pulsation dampeners were tested.

  12. Principal component analysis of shear strain effects.

    PubMed

    Chen, Hao; Varghese, Tomy

    2009-05-01

    Shear stresses are always present during quasi-static strain imaging, since tissue slippage occurs along the lateral and elevational directions during an axial deformation. Shear stress components along the axial deformation axis add to the axial deformation, while perpendicular components introduce both lateral and elevational rigid motion and deformation artifacts into the estimated axial and lateral strain tensor images. A clear understanding of the artifacts introduced into the normal and shear strain tensor images by shear deformations is essential. In addition, signal processing techniques for improved depiction of the strain distribution are required. In this paper, we evaluate the impact of artifacts introduced by lateral shear deformations on the normal strain tensors, estimated by varying the lateral shear angle during an axial deformation. Shear strains are quantified using the lateral shear angle during the applied deformation. Simulation and experimental validation using uniformly elastic and single-inclusion phantoms were performed. Variations in the elastographic signal-to-noise and contrast-to-noise ratios for axial deformations ranging from 0% to 5%, and for lateral shear angles ranging from 0 to 5 degrees, were evaluated. Our results demonstrate that the first and second principal component strain images provide higher signal-to-noise ratios, of 20 dB with simulations and 10 dB under experimental conditions, and contrast-to-noise ratio levels that are at least 20 dB higher when compared to the axial and lateral strain tensor images, when only lateral shear deformations are applied. For small axial deformations, lateral shear deformations significantly reduce strain image quality; however, the first principal component provides about a 1-2 dB improvement over the axial strain tensor image. Lateral shear deformations also significantly increase the noise level in the axial and lateral strain tensor images with larger axial deformations.

  13. Image denoising using principal component analysis in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bacchelli, Silvia; Papi, Serena

    2006-05-01

    In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen-Loeve transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of extensive numerical experiments encourage us to continue our studies in this direction.
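
    A rough sketch of this combination, assuming the pywt and scikit-learn packages; the wavelet, patch size, and threshold are illustrative choices rather than the authors' tailored filters:

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      img = rng.random((128, 128))                  # toy noisy image
      coeffs = pywt.wavedec2(img, "db2", level=2)   # [approx, (H,V,D), (H,V,D)]

      def shrink(band, thr=0.05):
          # PCA (Karhunen-Loeve) over 4x4 patches; small scores carry mostly noise
          h, w = (band.shape[0] // 4) * 4, (band.shape[1] // 4) * 4
          p = band[:h, :w].reshape(h // 4, 4, w // 4, 4)
          p = p.transpose(0, 2, 1, 3).reshape(-1, 16)
          pca = PCA().fit(p)
          s = pca.transform(p)
          s[np.abs(s) < thr] = 0.0                  # shrink low-energy coefficients
          r = pca.inverse_transform(s).reshape(h // 4, w // 4, 4, 4)
          out = band.copy()
          out[:h, :w] = r.transpose(0, 2, 1, 3).reshape(h, w)
          return out

      coeffs = [coeffs[0]] + [tuple(shrink(b) for b in lvl) for lvl in coeffs[1:]]
      denoised = pywt.waverec2(coeffs, "db2")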

  14. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    PubMed

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for the systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
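
    A worked sketch of the one-factor random-effects (ANOVA) estimates the note describes, using only NumPy; the patient and fraction counts and error magnitudes are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(3)
      p, k = 20, 5                                  # patients, fractions per patient
      sd_sys, sd_rand = 2.0, 1.5                    # toy truth (mm)
      y = 1.0 + rng.normal(0, sd_sys, (p, 1)) + rng.normal(0, sd_rand, (p, k))

      grand = y.mean()                              # population mean of setup errors
      ms_between = k * ((y.mean(1) - grand) ** 2).sum() / (p - 1)
      ms_within = ((y - y.mean(1, keepdims=True)) ** 2).sum() / (p * (k - 1))

      sigma_random = np.sqrt(ms_within)             # random component
      # subtracting ms_within removes the random-error contamination that the
      # conventional estimate (plain SD of patient means) retains
      sigma_systematic = np.sqrt(max(ms_between - ms_within, 0) / k)
      print(grand, sigma_systematic, sigma_random)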

  15. Study on failure analysis of array chip components in IRFPA

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaonan; He, Yingjie; Li, Jinping

    2016-10-01

    Infrared focal plane array (IRFPA) detectors have the advantages of strong anti-interference ability and high sensitivity, and their size, weight, and power dissipation have decreased noticeably compared with conventional infrared imaging systems. With the development of detector manufacturing technology and the reduction of cost, IRFPA detectors have been widely used in military and commercial fields. Owing to limitations of the array chip manufacturing process and to material defects, failures such as chip cracking, bad pixels, and abnormal output appear during testing; these restrict the performance of the infrared detector imaging system, and their effects gradually intensify as the focal plane array size expands and the pixel size shrinks. Based on analysis of the test results for the infrared detector array chip components, the failure modes were classified; the main failure modes of the chip components are chip cracking, bad pixels, and abnormal output. The failure mechanisms were analyzed in depth, and, based on this analysis, a series of measures, including screening materials and optimizing the manufacturing process of the array chip components, were adopted to improve the performance of the chip components and the test pass rate, so as to meet the requirements on detector performance.

  16. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  17. Component-Based Framework for Subsurface Simulations

    SciTech Connect

    Palmer, Bruce J.; Fang, Yilin; Hammond, Glenn E.; Gurumoorthi, Vidhya

    2007-08-01

    Simulations of the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges, from both the algorithmic and the computer science perspective. This paper describes the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks: one for performing smoothed particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for pore-scale simulations, while the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow.

  18. Component outage data analysis methods. Volume 2: Basic statistical methods

    NASA Astrophysics Data System (ADS)

    Marshall, J. A.; Mazumdar, M.; McCutchan, D. A.

    1981-08-01

    Statistical methods for analyzing outage data on major power system components such as generating units, transmission lines, and transformers are identified. The analysis methods produce outage statistics from component failure and repair data that help in understanding the failure causes and failure modes of various types of components. Methods for forecasting outage statistics for those components used in the evaluation of system reliability are emphasized.

  19. Columbia River Component Data Gap Analysis

    SciTech Connect

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  1. Component-based event composition modeling for CPS

    NASA Astrophysics Data System (ADS)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

    In order to combine the event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize CPS event processing. Firstly, formal representations of components and attribute-oriented events are defined. Each component consists of subcomponents and the corresponding event sets. The attribute “type” is added to the attribute-oriented event definition so as to describe its responsiveness to the component. Secondly, the component-based event composition model is constructed. A concept-lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, a communication-based train control system is simulated to verify the event composition model. Results show that the constructed event composition model can express composite events correctly and effectively.

  2. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm is a further extension of the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. In order to make a quantitative assessment of the algorithms in these experiments, the Peak Signal-to-Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, which is consistent with the visual effect illustrated by the denoised images, shows that the introduced GMCA algorithm possesses excellent denoising effectiveness for remote sensing images; it is even hard to visually distinguish the original noiseless image from the image recovered by the GMCA algorithm.
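
    For reference, both quality indices can be computed with scikit-image (assumed here) in a few lines:

      import numpy as np
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      rng = np.random.default_rng(12)
      clean = rng.random((128, 128))                      # toy reference image
      denoised = clean + rng.normal(0.0, 0.05, clean.shape)

      print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
      print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))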

  3. Validation in principal components analysis applied to EEG data.

    PubMed

    Costa, João Carlos G D; Da-Silva, Paulo José G; Almeida, Renan Moritz V R; Infantosi, Antonio Fernando C

    2014-01-01

    The well-known multivariate technique Principal Components Analysis (PCA) is usually applied to a sample, and so component scores are subject to sampling variability. However, few studies address their stability, an important topic when the sample size is small. This work presents three validation procedures applied to PCA, based on confidence regions generated by a variant of a nonparametric bootstrap called the partial bootstrap: (i) the assessment of PC score variability by the spread and overlapping of "confidence regions" plotted around these scores; (ii) the use of the confidence region centroids as a validation set; and (iii) the definition of the number of nontrivial axes to be retained for analysis. The methods were applied to EEG data collected during a postural control protocol with twenty-four volunteers. Two axes were retained for analysis, with 91.6% of explained variance. Results showed that the area of the confidence regions provided useful insights on the variability of scores and suggested that some subjects were not distinguishable from others, which was not evident from the principal planes. In addition, potential outliers, initially suggested by an analysis of the first principal plane, could not be confirmed by the confidence regions.
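
    One way to realize the partial bootstrap, sketched with synthetic placeholders for the EEG features: replicates are projected onto the axes of the original PCA, and the spread of each subject's replicated scores outlines its confidence region:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      X = rng.normal(size=(24, 10))                   # 24 subjects x 10 features (toy)
      axes = PCA(n_components=2).fit(X).components_   # axes from the full sample

      regions = [[] for _ in range(len(X))]
      for _ in range(500):
          idx = rng.integers(0, len(X), size=len(X))  # resample subjects
          Xb = X[idx]
          sb = (Xb - Xb.mean(0)) @ axes.T             # project onto the fixed axes
          for row, i in zip(sb, idx):
              regions[i].append(row)
      # the spread and overlap of regions[i] outline subject i's confidence region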

  4. Si-based RF MEMS components.

    SciTech Connect

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  5. Direct Numerical Simulation of Combustion Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Owoyele, Opeoluwa; Echekki, Tarek

    2016-11-01

    We investigate the potential of accelerating chemistry integration during the direct numerical simulation (DNS) of complex fuels based on the transport equations of representative scalars that span the desired composition space using principal component analysis (PCA). The transported principal components (PCs) offer significant potential to reduce the computational cost of DNS through a reduction in the number of transported scalars, as well as in the spatial and temporal resolution requirements. The strategy is demonstrated using DNS of a premixed methane-air flame in a 2D vortical flow and is extended to a 3D geometry to further demonstrate the computational efficiency of PC transport. The PCs are derived from a priori PCA of a subset of the full thermo-chemical scalar vector. The PCs' chemical source terms and transport properties are constructed and tabulated in terms of the PCs using artificial neural networks (ANNs). Comparison of DNS based on the full thermo-chemical state with DNS based on transport of 6 PCs shows excellent agreement, even for species that are not included in the PCA reduction. The transported PCs reproduce some of the salient features of strongly curved and strongly strained flames. The 2D DNS results also show a significant reduction, of two orders of magnitude, in the computational cost of the simulations, which enables an extension of the PCA approach to 3D DNS under similar computational requirements. This work was supported by the National Science Foundation Grant DMS-1217200.
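
    A hedged sketch of the tabulation idea, with a small scikit-learn network standing in for the ANN tables and a synthetic state vector standing in for the thermo-chemical scalars:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(10)
      latent = rng.normal(size=(5000, 6))                 # a few governing directions
      state = latent @ rng.normal(size=(6, 20))           # toy thermo-chemical states
      state += 0.01 * rng.normal(size=state.shape)
      source = np.sin(state[:, 0]) + 0.1 * state[:, 1]    # toy chemical source term

      pcs = PCA(n_components=6).fit_transform(state)      # transported PCs
      ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                         random_state=0).fit(pcs, source)
      print("R^2 of the PC -> source-term fit:", ann.score(pcs, source))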

  6. Core Bioactive Components Promoting Blood Circulation in the Traditional Chinese Medicine Compound Xueshuantong Capsule (CXC) Based on the Relevance Analysis between Chemical HPLC Fingerprint and In Vivo Biological Effects

    PubMed Central

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  7. Core bioactive components promoting blood circulation in the traditional Chinese medicine compound xueshuantong capsule (CXC) based on the relevance analysis between chemical HPLC fingerprint and in vivo biological effects.

    PubMed

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  8. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

    A probabilistic approach is described for the aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.

  9. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes component-based approach (CBA) for learning management system development. Learning object as components of e-learning courses and their metadata is considered. The architecture of learning management system based on CBA being developed in Riga Technical University, namely its architecture, elements and possibilities are…

  10. Balancing generality and specificity in component-based reuse

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Beck, Jon

    1992-01-01

    For a component industry to be successful, we must move beyond the current techniques of black box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to customization of reusable components.

  11. Application of independent component analysis in face images: a survey

    NASA Astrophysics Data System (ADS)

    Huang, Yuchi; Lu, Hanqing

    2003-09-01

    Face technologies, which can be applied to access control and surveillance, are essential to intelligent vision-based human computer interaction. The research efforts in this field include face detection, face recognition, face retrieval, etc. However, these tasks are challenging because of the variability in view point, lighting, pose, and expression of human faces. An ideal face representation should account for this variability so that robust algorithms can be developed for these applications. Independent Component Analysis (ICA), as an unsupervised learning technique, has been used to find such representations and has obtained good performance in several applications. In the first part of this paper, we describe the ICA model and its extensions: Independent Subspace Analysis (ISA) and Topographic ICA (TICA). We then summarize the progress in applications of ICA and its extensions to face images. Finally, we propose a promising direction for future research.

  12. Power analysis of principal components regression in genetic association studies.

    PubMed

    Shen, Yan-feng; Zhu, Jun

    2009-10-01

    Association analysis provides an opportunity to find genetic variants underlying complex traits. A principal components regression (PCR)-based approach was shown to outperform some competing approaches. However, a limitation of this method is that the principal components (PCs) selected from single nucleotide polymorphisms (SNPs) may be unrelated to the phenotype. In this article, we investigate the theoretical properties of such a method in more detail. We first derive the exact power function of the test based on PCR, and hence clarify the relationship between the test power and the degrees of freedom (DF). Next, we extend the PCR test to a general weighted PCs test, which provides a unified framework for understanding the properties of some related statistics. We then compare the performance of these tests. We also introduce several data-driven adaptive alternatives to overcome difficulties in the PCR approach. Finally, we illustrate our results using simulations based on real genotype data. Simulation study shows the risk of using the unsupervised rule to determine the number of PCs, and demonstrates that there is no single uniformly powerful method for detecting genetic variants.
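
    A minimal PCR sketch with scikit-learn and synthetic genotypes; the number of retained PCs is an illustrative choice and, as the abstract cautions, nothing guarantees that the retained PCs relate to the phenotype:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)
      G = rng.binomial(2, 0.4, size=(300, 50)).astype(float)  # SNPs in one gene
      y = 0.5 * G[:, 3] + rng.normal(size=300)                # toy phenotype

      Z = PCA(n_components=5).fit_transform(G)                # leading PCs of the SNPs
      print("R^2 of the PCR fit:", LinearRegression().fit(Z, y).score(Z, y))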

  13. Eigencorneas: application of principal component analysis to corneal topography.

    PubMed

    Rodríguez, Pablo; Navarro, Rafael; Rozema, Jos J

    2014-11-01

    To determine the minimum number of orthonormal basis functions needed to accurately represent the great majority of corneal topographies from a normal population. Principal Component Analysis was applied to the elevation topographies of the anterior and posterior corneal surfaces and central thickness of 368 eyes of 184 healthy subjects. PCA was applied directly to the input elevation data points and after fitting them to Zernike polynomials (up to 8th order, 8 mm diameter). The anterior and posterior surfaces, as well as right eye and left eye data, were analysed both separately and jointly. A threshold based on the amount of explained variance (99%) was applied to determine the minimum number of basis functions (eigencorneas) or degrees of freedom (DoF) in the population. The eigenvectors directly obtained from elevation data resemble Zernike polynomials. The separate principal component analysis on the Zernike coefficients of anterior and posterior surfaces yielded 5 and 9 DoF, respectively. An additional reduction to 11 DoF (instead of 15 DoF) was achieved when performing a joint PCA that included both surfaces as well as central thickness. Finally, a further reduction was obtained by pooling right and left eye data together, to only 18 DoF. The combination of Zernike fit and Principal Component Analysis yields a strong reduction of dimensionality of elevation topography data, to only 19 independent parameters (18 DoF plus population average), which indicates a high degree of correlation existing between anterior and posterior surfaces, and between eyes. The resulting eigencorneas are especially well suited for practical applications, as they are uncorrelated and orthonormal linear combinations of Zernike polynomials.
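
    The variance-threshold step can be sketched as follows, assuming scikit-learn and a synthetic, strongly correlated stand-in for the Zernike-coefficient matrix:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(7)
      latent = rng.normal(size=(368, 8))                    # a few true DoF
      coeffs = latent @ rng.normal(size=(8, 45)) \
               + 0.01 * rng.normal(size=(368, 45))          # 368 eyes x 45 Zernike terms

      pca = PCA(n_components=0.99).fit(coeffs)              # keep 99% of the variance
      print("degrees of freedom retained:", pca.n_components_)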

  14. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distributions). Using ICA, the signal subspace will then be decomposed into statistically independent components. The individual conductivity distributions can be reconstructed by the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  15. [Construction of hospital management indices using principal component analysis].

    PubMed

    Almenara-Barrios, José; García-Ortega, Cesáreo; González-Caballero, Juan Luis; Abellán-Hervás, María José

    2002-01-01

    To construct useful indices for hospital management, based on descriptive multivariate techniques. Data were collected during 1999 and 2000 on hospital admissions occurring during 1997-1998 at Hospital General de Algeciras, part of the Servicio Andaluz de Salud (SAS) of the Sistema Nacional de Salud Español (Spanish National Health Service). The following variables, routinely monitored by health authorities, were analyzed: number of admissions, mortality, number of re-admissions, number of outpatient consultations, case-mix index, number of stays, and functional index. Variables were measured in a total of 22,486 admissions. We applied the principal component analysis (PCA) method using the correlation matrix R. The first two components were selected, accounting cumulatively for 62.67% of the variability in the data. The first component represents a new index of the number of persons attended, which we have termed Case Load. The second component represents the difficulty of the attended cases, which we have termed Case Complexity. These two indices are useful for classifying hospital services.

  16. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
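
    The DOGEDYN update rules are not spelled out in this note; as a generic illustration of gradient-descent sequential PCA, the sketch below uses Oja's rule with a decaying learning rate to extract the first principal component:

      import numpy as np

      rng = np.random.default_rng(8)
      X = rng.normal(size=(2000, 10)) @ rng.normal(size=(10, 10))  # correlated data
      X -= X.mean(0)

      w = rng.normal(size=10)
      w /= np.linalg.norm(w)
      for t, x in enumerate(X):
          eta = 1.0 / (100 + t)           # decaying learning rate
          s = w @ x                       # current component score
          w += eta * s * (x - s * w)      # Oja's rule: w converges toward PC1

      top = np.linalg.eigh(np.cov(X.T))[1][:, -1]     # batch answer for comparison
      print("alignment with true PC1:", abs(w / np.linalg.norm(w) @ top))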

  17. Advances in resonance based NDT for ceramic components

    NASA Astrophysics Data System (ADS)

    Hunter, L. J.; Jauriqui, L. M.; Gatewood, G. D.; Sisneros, R.

    2012-05-01

    The application of resonance based non-destructive testing methods has been providing benefit to manufacturers of metal components in the automotive and aerospace industries for many years. Recent developments in resonance based technologies are now allowing the application of resonance NDT to ceramic components including turbine engine components, armor, and hybrid bearing rolling elements. Application of higher frequencies and advanced signal interpretation are now allowing Process Compensated Resonance Testing to detect both internal material defects and surface breaking cracks in a variety of ceramic components. Resonance techniques can also be applied to determine material properties of coupons and to evaluate process capability for new manufacturing methods.

  18. Analysis of nuclear power plant component failures

    SciTech Connect

    Not Available

    1984-01-01

    Items are shown that caused 90% of the nuclear unit outages and/or deratings between 1971 and 1980, and the magnitude of the problem is indicated by an estimate of power replacement costs when the units are out of service or derated. The funding EPRI has provided on these specific items for R and D and technology transfer in the past, and the funding planned for the future (1982 to 1986), are shown. EPRI's R and D may help the utilities with only a small part of their nuclear unit outage problems. For example, refueling is the major cause of nuclear unit outages or deratings, and the steam turbine is the second major cause of nuclear unit outages; however, these two items have been ranked fairly low on the EPRI priority list for R and D funding. Other items, such as nuclear safety (NRC requirements), reactor general, reactor and safety valves and piping, and reactor fuel, appear to be receiving more priority than is warranted by the analysis of nuclear unit outage causes.

  19. EXTRACTING PRINCIPLE COMPONENTS FOR DISCRIMINANT ANALYSIS OF FMRI IMAGES.

    PubMed

    Liu, Jingyu; Xu, Lai; Caprihan, Arvind; Calhoun, Vince D

    2008-05-12

    This paper presents an approach for selecting optimal components for discriminant analysis. Such an approach is useful when further detailed analysis for discrimination or characterization requires dimensionality reduction. Our approach can accommodate a categorical variable such as diagnosis (e.g., schizophrenic patient or healthy control) or a continuous variable such as severity of the disorder. This information is utilized as a reference for measuring a component's discriminant power after principal component decomposition. After sorting each component according to its discriminant power, we extract the best components for discriminant analysis. An application of our reference selection approach is shown using a functional magnetic resonance imaging data set in which the sample size is much smaller than the dimensionality. The results show that the reference selection approach provides an improved discriminant component set compared to other approaches. Our approach is general and provides a solid foundation for further discrimination and classification studies.
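
    A minimal sketch of reference-based component ranking on synthetic data, assuming scikit-learn: decompose with PCA, then score each component by the strength of its association with the diagnosis reference:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(9)
      X = rng.normal(size=(40, 5000))   # 40 scans x many voxels (toy fMRI data)
      y = np.repeat([0, 1], 20)         # reference: patient vs healthy control

      scores = PCA(n_components=10).fit_transform(X)
      power = [abs(np.corrcoef(scores[:, k], y)[0, 1]) for k in range(10)]
      order = np.argsort(power)[::-1]   # components sorted by discriminant power
      print("best components:", order[:3])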

  20. [Royal jelly: component efficiency, analysis, and standardisation].

    PubMed

    Oršolić, Nada

    2013-09-01

    Royal jelly is a viscous substance secreted by the hypopharyngeal and mandibular glands of worker honeybees (Apis mellifera) that contains a considerable amount of proteins, free amino acids, lipids, vitamins, sugars, and bioactive substances such as 10-hydroxy-trans-2-decenoic acid, antibacterial protein, and 350-kDa protein. These properties make it an attractive ingredient in various types of healthy foods. This article provides a brief review of the molecular mechanisms involved in the development of certain disorders that can be remedied by royal jelly, based on a selection of in vivo and in vitro studies. It also describes the current understanding of the mechanisms and beneficial effects by which royal jelly helps to combat aging-related complications. Royal jelly has been reported to exhibit beneficial physiological and pharmacological effects in mammals, including vasodilative and hypotensive activities, antihypercholesterolemic activity, and antitumor activity. As its composition varies significantly (for both fresh and dehydrated samples), the article offers a few recommendations for defining new quality standards.

  1. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications, such as those used in scientific discovery, emergency response, and financial and intelligence analysis, exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.

  2. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  3. Component-based assistants for MEMS design tools

    NASA Astrophysics Data System (ADS)

    Hahn, Kai; Brueck, Rainer; Schneider, Christian; Schumer, Christian; Popp, Jens

    2001-04-01

    With this paper a new approach for MEMS design tools is introduced. An analysis of the design tool market shows that most designers work with large and inflexible frameworks, which are expensive to purchase and maintain and give no optimal support for the MEMS design process. The concept of design assistants, carried out with the concept of interacting software components, denotes a new generation of flexible, small, semi-autonomous software systems that are used to solve specific MEMS design tasks in close interaction with the designer. The degree of interaction depends on the complexity of the design task to be performed and on the possibility of formalizing the respective knowledge. In this context the Internet, as one of today's most important communication media, provides support for new tool concepts on the basis of the Java programming language. These modern technologies can be used to set up distributed and platform-independent applications. Thus the idea emerged to implement design assistants using Java. According to the MEMS design model, new process sequences have to be defined anew for every specific design object. As a consequence, assistants have to be built dynamically depending on the requirements of the design process, which can be achieved with component-based software development. Componentware offers the possibility of realizing design assistants, in areas like design rule checks, process consistency checks, technology definitions, graphical editors, etc., that may reside distributed over the Internet, communicating via Internet protocols. At the University of Siegen, a directory of reusable MEMS components has been created, containing a process specification assistant and a layout verification assistant for lithography-based MEMS technologies.

  4. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is that they produce false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively; these are not only greater than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method achieves improved detection in abnormal retinal images and decreases false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results.

  5. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced; analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. The VCA approach is based on principal component analysis (PCA), a method of reduced-dimensionality feature extraction that computes a covariance matrix from the original input vector; the eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images, because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. In VCA, a LiDAR point cloud is represented as a series of voxels whose values correspond to the point density within the corresponding location. From this voxelized space, block-based PCA is used to analyze sections of the space which, when combined, represent features of the entire 3D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects (vegetation, vehicles, buildings, and barriers) with an overall accuracy of 93.8%.
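
    The voxelization step can be sketched with NumPy alone (the grid size and point cloud are illustrative); block-based PCA would then be run on sections of this grid:

      import numpy as np

      rng = np.random.default_rng(11)
      pts = rng.random((5000, 3))                     # toy LiDAR cloud in [0,1]^3
      nbins = 16
      edges = [np.linspace(0.0, 1.0, nbins + 1)] * 3
      voxels, _ = np.histogramdd(pts, bins=edges)     # point density per voxel
      voxels /= voxels.max()                          # normalized density values
      # block-based PCA: flatten local blocks of `voxels` into vectors, fit PCA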

  6. Key components of financial-analysis education for clinical nurses.

    PubMed

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were held; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.

  7. Three component microseism analysis in Australia from deconvolution enhanced beamforming

    NASA Astrophysics Data System (ADS)

    Gal, Martin; Reading, Anya; Ellingsen, Simon; Koper, Keith; Burlacu, Relu; Tkalčić, Hrvoje; Gibbons, Steven

    2016-04-01

    Ocean-induced microseisms in the period range 2-10 seconds are generated in deep oceans and near coastal regions as body and surface waves. The generation of these waves can take place over an extended area and in a variety of geographical locations at the same time. It is therefore common to observe multiple arrivals with a variety of slowness vectors, which leads to the desire to measure multiple arrivals accurately. We present a deconvolution-enhanced direction-of-arrival algorithm for single- and three-component arrays, based on CLEAN. The algorithm iteratively removes sidelobe contributions in the power spectrum, thereby improving the signal-to-noise ratio of weaker sources. The power level on each component (vertical, radial, and transverse) can be accurately estimated, as the beamformer decomposes the power spectrum into point sources. We first apply the CLEAN-aided beamformer to synthetic data to show its performance under known conditions and then evaluate real (observed) data from a range of arrays with apertures between 10 and 70 km (ASAR, WRA, and NORSAR) to showcase the improvement in resolution. We further give a detailed analysis of the three-component wavefield in Australia, including source locations, power levels, and phase ratios, from two spiral arrays (PSAR and SQspa). For PSAR the analysis is carried out in the frequency range 0.35-1 Hz. We find LQ, Lg, and fundamental- and higher-mode Rg wave phases. Additionally, we also observe the Sn phase. This is the first time this has been achieved through beamforming on microseism noise, and it underlines the potential for extra seismological information that can be extracted using the new implementation of CLEAN. The fundamental-mode Rg waves are dominant in power at low frequencies and show power levels equal to LQ towards higher frequencies. Generation locations of Rg and LQ are mildly correlated at low frequencies and uncorrelated at higher frequencies. Results from SQspa will discuss lower frequencies around the

  8. Arthropod surveillance programs: Basic components, strategies, and analysis

    USDA-ARS?s Scientific Manuscript database

    Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium “Advancements in arthro...

  9. Principal component analysis of minimal excitatory postsynaptic potentials.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1998-02-20

    'Minimal' excitatory postsynaptic potentials (EPSPs) are often recorded from central neurones, specifically for quantal analysis. However, the EPSPs may emerge from activation of several fibres or transmission sites, so that formal quantal analysis may give false results. Here we extended the application of principal component analysis (PCA) to minimal EPSPs. We tested a PCA algorithm and a new graphical 'alignment' procedure against both simulated data and hippocampal EPSPs. Minimal EPSPs were recorded before and up to 3.5 h following induction of long-term potentiation (LTP) in CA1 neurones. In 29 out of 45 EPSPs, two (N=22) or three (N=7) components were detected which differed in latency, rise time (Trise), or both. The detected differences ranged from 0.6 to 7.8 ms in latency and from 1.6 to 9 ms in Trise. Different components behaved differently following LTP induction. Cases were found in which one component was potentiated immediately after tetanus whereas the other was potentiated with a delay of 15-60 min. The immediately potentiated component could decline within 1-2 h, so that the two components contributed differently to the early (< 1 h) LTP1 and later (1-4 h) LTP2 phases. The noise deconvolution technique was applied to both conventional EPSP amplitudes and the scores of separate components. Cases are illustrated in which the quantal size (upsilon) estimated from the EPSP amplitudes increased whereas upsilon estimated from the component scores was stable during LTP1. Analysis of component scores could show apparent two-fold increases in upsilon, which are interpreted as reflections of synchronized quantal release. In general, the results demonstrate the applicability of PCA for separating EPSPs into different components and its usefulness for precise analysis of synaptic transmission.

  10. Array Independent Component Analysis with Application to Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kukuyeva, Irina A.

    2012-11-01

    There are three ways to learn about an object: from samples taken directly from the site, from simulation studies based on its known scientific properties, or from remote sensing images. All three are carried out to study Earth and Mars. Our goal, however, is to learn about the second largest storm on Jupiter, called the White Oval, whose characteristics are unknown to this day. As Jupiter is a gas giant and hundreds of millions of miles away from Earth, we can only make inferences about the planet from retrieval algorithms and remotely sensed images. Our focus is to find latent variables from the remotely sensed data that best explain its underlying atmospheric structure. Principal Component Analysis (PCA) is currently the most commonly employed technique to do so. For a data set with more than two modes, this approach fails to account for all of the variable interactions, especially if the distribution of the variables is not multivariate normal; an assumption that is rarely true of multispectral images. The thesis presents an overview of PCA along with the most commonly employed decompositions in other fields: Independent Component Analysis, Tucker-3 and CANDECOMP/PARAFAC and discusses their limitations in finding unobserved, independent structures in a data cube. We motivate the need for a novel dimension reduction technique that generalizes existing decompositions to find latent, statistically independent variables for one side of a multimodal (number of modes greater than two) data set while accounting for the variable interactions with its other modes. Our method is called Array Independent Component Analysis (AICA). As the main question of any decomposition is how to select a small number of latent variables that best capture the structure in the data, we extend the heuristic developed by Ceulemans and Kiers in [10] to aid in model selection for the AICA framework. The effectiveness of each dimension reduction technique is determined by the degree of

  11. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    SciTech Connect

    Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA is very different from tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes, and this allows PCA to detect changes in the underlying velocity and density spectra at high opacities, where spectral analysis of the maps yields the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that a proper gauging of the PCA index is made. However, we find that such gauging is not easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for the velocity and density fields, which show monotonic PCA behavior. We attribute this difference to PCA's sensitivity to Fourier phase information.
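
    A minimal sketch of PCA applied to a PPV cube is given below: sky positions are treated as samples, velocity channels as variables, and the eigenvalue spectrum (the basis of the "PCA index" discussed above) is computed. The random cube is a stand-in for the fBm/MHD simulations; this is not the authors' pipeline.

```python
# Sketch: PCA of a position-position-velocity cube (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nv = 32, 32, 64
cube = rng.normal(size=(nx, ny, nv))          # placeholder PPV cube

X = cube.reshape(nx * ny, nv)                 # rows: sky positions, cols: velocity channels
X = X - X.mean(axis=0)                        # remove the mean spectrum
cov = X.T @ X / X.shape[0]                    # channel-channel covariance matrix
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending

# The fall-off of this eigenvalue spectrum is the "PCA index" that must be
# gauged against the underlying velocity power spectrum at varying opacity.
print("first 5 eigenvalues:", evals[:5].round(3))
eigenimages = (X @ evecs[:, :3]).reshape(nx, ny, 3)   # maps of PC scores
print("eigenimage stack shape:", eigenimages.shape)
```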

  12. A flexible framework for sparse simultaneous component based data integration

    PubMed Central

    2011-01-01

    Background: High-throughput data are complex, and methods that reveal the structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays the challenge is often to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because the contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results: We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, the sparse group lasso, and the elitist lasso. In addition, the algorithmic results can easily be transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of the different penalties with respect to sparseness across and within data blocks. Conclusion: Sparse simultaneous component analysis is a useful method for data integration: first, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses, and second, interpretation of the results is greatly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such, structures can be found that are
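
    The following sketch illustrates the simultaneous-component idea in its simplest sparse form: two data blocks measured on the same samples are concatenated column-wise and decomposed with an l1-penalized PCA. scikit-learn's SparsePCA stands in for the paper's more general family of tunable penalties; all data and settings below are assumptions.

```python
# Sketch: sparse simultaneous component analysis via SparsePCA on
# column-concatenated blocks (illustrative stand-in for the paper's method).
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(2)
n = 50                                        # shared samples (e.g. E. coli runs)
block1 = rng.normal(size=(n, 120))            # e.g. platform 1 metabolites
block2 = rng.normal(size=(n, 80))             # e.g. platform 2 metabolites

X = np.hstack([block1, block2])               # simultaneous analysis of both blocks
X = X - X.mean(axis=0)

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)
loadings = spca.components_                   # (3, 200); many entries exactly zero
per_block = np.split(loadings, [block1.shape[1]], axis=1)
for k, B in enumerate(per_block, 1):
    print(f"block {k}: {np.mean(B == 0):.0%} of loadings shrunk to zero")
```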

  13. [Analysis of three-dimensional fluorescence overlapping spectra using differential spectra and independent component analysis].

    PubMed

    Yu, Shao-Hui; Zhang, Yu-Jun; Zhao, Nan-Jing; Xiao, Xue; Wang, Huan-Bo; Yin, Gao-Fang

    2013-01-01

    The analysis of overlapping multi-component three-dimensional fluorescence spectra is notoriously difficult. Exploiting the advantages of differential spectra and following the calculation principle of two-dimensional differential spectra, the full three-dimensional fluorescence spectra, comprising both excitation and emission dimensions, are utilized. First, the excitation differential spectra and emission differential spectra are computed after unfolding the three-dimensional fluorescence spectra. Then the excitation and emission differential spectra of each single component are obtained by analyzing the multi-component differential spectra using independent component analysis. In this process, cubic spline interpolation increases the number of data points in the excitation spectra, and roughness-penalty smoothing reduces the noise in the emission spectra, both of which benefit the computation of the differential spectra. The similarity indices between the standard and recovered spectra show that independent component analysis based on differential spectra is well suited to component recognition in overlapping three-dimensional fluorescence spectra.
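
    A rough sketch of the processing chain described above is shown below, assuming synthetic one-dimensional excitation spectra: cubic-spline interpolation adds data points, numerical differentiation produces the differential spectra, and FastICA recovers the single-component signals. The paper's roughness-penalty smoothing step is omitted here.

```python
# Sketch: differential spectra + ICA on synthetic overlapping spectra.
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
wl = np.linspace(250, 450, 41)                # coarse excitation grid (nm)
s1 = np.exp(-0.5 * ((wl - 300) / 15) ** 2)    # pure-component spectrum 1
s2 = np.exp(-0.5 * ((wl - 360) / 20) ** 2)    # pure-component spectrum 2
A = rng.uniform(0.2, 1.0, size=(10, 2))       # mixing: 10 samples, 2 components
mix = A @ np.vstack([s1, s2]) + rng.normal(0, 0.005, (10, wl.size))

# Cubic spline increases the number of excitation data points ...
wl_fine = np.linspace(250, 450, 401)
fine = np.vstack([CubicSpline(wl, row)(wl_fine) for row in mix])
# ... and differentiation sharpens the overlapping bands.
dmix = np.gradient(fine, wl_fine, axis=1)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(dmix.T).T         # recovered differential spectra
print("recovered source matrix shape:", sources.shape)
```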

  14. Using independent component analysis for material estimation in hyperspectral images.

    PubMed

    Kuan, Chia-Yun; Healey, Glenn

    2004-06-01

    We develop a method for automated material estimation in hyperspectral images. The method models a hyperspectral pixel as a linear mixture of unknown materials. The method is particularly useful for applications in which material regions in a scene are smaller than one pixel. In contrast to many material estimation methods, the new method uses the statistics of large numbers of pixels rather than attempting to identify a small number of the purest pixels. The method is based on maximizing the independence of material abundances at each pixel. We show how independent component analysis algorithms can be adapted for use with this problem. We demonstrate properties of the method by application to airborne hyperspectral data.
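
    The linear mixing model and ICA-based abundance estimation can be sketched as follows; the endmember spectra and Dirichlet-distributed sub-pixel abundances are synthetic assumptions, and scikit-learn's FastICA stands in for the adapted algorithms described in the paper.

```python
# Sketch: ICA abundance estimation under a linear pixel mixing model.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_bands, n_pixels, n_materials = 60, 5000, 3
endmembers = rng.uniform(0, 1, size=(n_bands, n_materials))       # columns: materials
abund = rng.dirichlet(np.ones(n_materials) * 0.3, size=n_pixels)  # sub-pixel mixing
pixels = abund @ endmembers.T + rng.normal(0, 0.01, (n_pixels, n_bands))

# Rows of `pixels` are observations; ICA seeks statistically independent
# per-pixel abundances rather than searching for a few "pure" pixels.
ica = FastICA(n_components=n_materials, random_state=0)
est_abund = ica.fit_transform(pixels)         # (n_pixels, n_materials), up to scale/sign
print("estimated abundance maps:", est_abund.shape)
```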

  15. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in economics and management. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Our approach is able to use all of the variance information in the original data, rather than only the centers, vertices, etc. used by the prevailing representative-type approaches in the literature. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explaining the puzzle of the risk-return tradeoff in China's stock market.

  16. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small, and hidden targets in synthetic aperture radar (SAR) images remains a challenge for automatic target recognition (ATR) systems. How to effectively separate such targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance targets in SAR images and improve the signal-to-clutter ratio (SCR), which aids the detection and recognition of faint targets. This paper therefore proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is used. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible and that it can improve the SCR of SAR images and increase the detection rate for faint, small targets.

  17. Using Dynamic Master Logic Diagram for component partial failure analysis

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    A methodology using the Dynamic Master Logic Diagram (DMLD) for the evaluation of component partial failure is presented. Since past PRAs have not focused on partial failure effects, component reliability has been based only on the binary-state assumption, i.e., a component is defined as either fully failed or fully functioning. This paper develops an approach to predicting and estimating component partial failure on the basis of a fuzzy-state assumption. An example application of the methodology to the reliability function diagram of a centrifugal pump is presented.

  18. Reliability-based robust design optimization of vehicle components, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based design optimization, the reliability sensitivity analysis and robust design method are employed to present a practical and effective approach for reliability-based robust design optimization of vehicle components. A procedure for reliability-based robust design optimization of vehicle components is proposed. Application of the method is illustrated by reliability-based robust design optimization of axle and spring. Numerical results have shown that the proposed method can be trusted to perform reliability-based robust design optimization of vehicle components.

  19. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). The resultant spatiotemporal information about the current dynamics can then be decomposed into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings out new, more detailed information on the timing and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  20. Quantitative Analysis of Porosity and Transport Properties by FIB-SEM 3D Imaging of a Solder Based Sintered Silver for a New Microelectronic Component

    NASA Astrophysics Data System (ADS)

    Rmili, W.; Vivet, N.; Chupin, S.; Le Bihan, T.; Le Quilliec, G.; Richard, C.

    2016-04-01

    As part of the development of a new assembly technology to achieve bonding for an innovative silicon carbide (SiC) power device used in harsh environments, the aim of this study is to compare two silver sintering profiles and define the best candidate die-attach material for this new component. To achieve this goal, the solder joints have been characterized in terms of porosity by determining the morphological characteristics of the material heterogeneities and estimating their thermal and electrical transport properties. The three-dimensional (3D) microstructure of the sintered silver samples has been reconstructed using a focused ion beam scanning electron microscope (FIB-SEM) tomography technique. The sample preparation and the experimental milling and imaging parameters have been optimized in order to obtain a high-quality 3D reconstruction. Volume fractions and the volumetric connectivity of the individual phases (silver and voids) have been determined. The effective thermal and electrical conductivities of the samples and the tortuosity of the silver phase have also been evaluated by solving the diffusive transport equation.

  1. Estimation of individual evoked potential components using iterative independent component analysis.

    PubMed

    Zouridakis, G; Iyer, D; Diaz, J; Patidar, U

    2007-09-07

    Independent component analysis (ICA) has been successfully employed in the study of single-trial evoked potentials (EPs). In this paper, we present an iterative temporal ICA methodology that processes multielectrode single-trial EPs, one channel at a time, in contrast to most existing methodologies which are spatial and analyze EPs from all recording channels simultaneously. The proposed algorithm aims at enhancing individual components in an EP waveform in each single trial, and relies on a dynamic template to guide EP estimation. To quantify the performance of this method, we carried out extensive analyses with artificial EPs, using different models for EP generation, including the phase-resetting and the classical additive-signal models, and several signal-to-noise ratios and EP component latency jitters. Furthermore, to validate the technique, we employed actual recordings of the auditory N100 component obtained from normal subjects. Our results with artificial data show that the proposed procedure can provide significantly better estimates of the embedded EP signals compared to plain averaging, while with actual EP recordings, the procedure can consistently enhance individual components in single trials, in all subjects, which in turn results in enhanced average EPs. This procedure is well suited for fast analysis of very large multielectrode recordings in parallel architectures, as individual channels can be processed simultaneously on different processors. We conclude that this method can be used to study the spatiotemporal evolution of specific EP components and may have a significant impact as a clinical tool in the analysis of single-trial EPs.

  2. Principal Component Analysis for pattern recognition in volcano seismic spectra

    NASA Astrophysics Data System (ADS)

    Unglert, Katharina; Jellinek, A. Mark

    2016-04-01

    Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
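
    A minimal sketch of the PCA-plus-hierarchical-clustering pipeline is shown below on synthetic tremor-like spectra; the two spectral families, window counts, and Ward linkage are assumptions for illustration, not the authors' processing choices.

```python
# Sketch: PCA followed by hierarchical clustering of seismic-like spectra.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
freqs = np.linspace(0.1, 10, 200)             # Hz

def band(f0, w):                              # smooth spectral peak
    return np.exp(-0.5 * ((freqs - f0) / w) ** 2)

# Two families of tremor-like spectra plus noise (synthetic placeholders)
spectra = np.vstack(
    [band(1.0, 0.3) * rng.uniform(0.5, 1.5) + rng.normal(0, 0.02, 200)
     for _ in range(40)]
    + [band(3.5, 0.8) * rng.uniform(0.5, 1.5) + rng.normal(0, 0.02, 200)
       for _ in range(40)]
)

scores = PCA(n_components=5).fit_transform(spectra - spectra.mean(axis=0))
Z = linkage(scores, method="ward")            # hierarchical clustering in PC space
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```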

  3. Acceleration of dynamic fluorescence molecular tomography with principal component analysis

    PubMed Central

    Zhang, Guanglei; He, Wei; Pu, Huangsheng; Liu, Fei; Chen, Maomao; Bai, Jing; Luo, Jianwen

    2015-01-01

    Dynamic fluorescence molecular tomography (FMT) is an attractive imaging technique for three-dimensionally resolving the metabolic processes of fluorescent biomarkers in small animals. When combined with compartmental modeling, dynamic FMT can be used to obtain parametric images which provide quantitative pharmacokinetic information for drug development and metabolic research. However, the computational burden of dynamic FMT is extremely large due to the big data sets arising from the long measurement process and the dense sampling of the device. In this work, we propose to accelerate the reconstruction process of dynamic FMT based on principal component analysis (PCA). Taking advantage of the compression property of PCA, the dimension of the sub-weight matrix used for solving the inverse problem is reduced by retaining only a few principal components, which retain most of the effective information of the sub-weight matrix. The reconstruction process of dynamic FMT can therefore be accelerated by solving the smaller-scale inverse problem. Numerical simulations and a mouse experiment are performed to validate the performance of the proposed method. Results show that the proposed method can greatly accelerate the reconstruction of parametric images in dynamic FMT almost without degradation in image quality. PMID:26114027
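
    The acceleration idea can be sketched as a truncated-SVD compression of the weight matrix followed by a solve in the reduced space; the matrix sizes, the nearly low-rank construction, and the plain pseudo-inverse solve below are illustrative assumptions, not the paper's reconstruction algorithm.

```python
# Sketch: PCA/truncated-SVD compression of a weight matrix before inversion.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_vox, k = 2000, 500, 40
A = rng.normal(size=(n_meas, k))
B = rng.normal(size=(k, n_vox))
W = A @ B + 0.01 * rng.normal(size=(n_meas, n_vox))   # nearly rank-k weight matrix
x_true = np.maximum(rng.normal(size=n_vox), 0)        # nonnegative concentration map
y = W @ x_true + rng.normal(0, 0.01, n_meas)          # measurements

# Truncated SVD keeps the k principal components of W ...
U, s, Vt = np.linalg.svd(W, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]
# ... so the inverse problem is solved in the much smaller k-dim space.
y_red = Uk.T @ y                                      # project measurements
x_hat = Vtk.T @ (y_red / sk)                          # minimum-norm solve
print("data-fit residual:", np.linalg.norm(W @ x_hat - y) / np.linalg.norm(y))
```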

  4. Acceleration of dynamic fluorescence molecular tomography with principal component analysis.

    PubMed

    Zhang, Guanglei; He, Wei; Pu, Huangsheng; Liu, Fei; Chen, Maomao; Bai, Jing; Luo, Jianwen

    2015-06-01

    Dynamic fluorescence molecular tomography (FMT) is an attractive imaging technique for three-dimensionally resolving the metabolic processes of fluorescent biomarkers in small animals. When combined with compartmental modeling, dynamic FMT can be used to obtain parametric images which provide quantitative pharmacokinetic information for drug development and metabolic research. However, the computational burden of dynamic FMT is extremely large due to the big data sets arising from the long measurement process and the dense sampling of the device. In this work, we propose to accelerate the reconstruction process of dynamic FMT based on principal component analysis (PCA). Taking advantage of the compression property of PCA, the dimension of the sub-weight matrix used for solving the inverse problem is reduced by retaining only a few principal components, which retain most of the effective information of the sub-weight matrix. The reconstruction process of dynamic FMT can therefore be accelerated by solving the smaller-scale inverse problem. Numerical simulations and a mouse experiment are performed to validate the performance of the proposed method. Results show that the proposed method can greatly accelerate the reconstruction of parametric images in dynamic FMT almost without degradation in image quality.

  5. Sparse principal component analysis by choice of norm.

    PubMed

    Qi, Xin; Luo, Ruiyan; Zhao, Hongyu

    2013-02-01

    Recent years have seen the development of several methods for sparse principal component analysis due to its importance in the analysis of high-dimensional data. Despite their demonstrated usefulness in practical applications, these methods are limited by the lack of orthogonality in the loadings (coefficients) of different principal components, correlation among the principal components, expensive computation, and the lack of theoretical results such as consistency in high-dimensional situations. In this paper, we propose a new sparse principal component analysis method by introducing a new norm to replace the usual norm in traditional eigenvalue problems, and we propose an efficient iterative algorithm to solve the optimization problems. With this method, we can efficiently obtain uncorrelated principal components or orthogonal loadings, and achieve the goal of explaining a high percentage of variation with sparse linear combinations. Due to the strict convexity of the new norm, we can prove the convergence of the iterative method and provide a detailed characterization of the limits. We also prove that the obtained principal component is consistent for a single-component model in high-dimensional situations. As an illustration, we apply this method to real gene expression data with competitive results.

  6. Blind Extraction of an Exoplanetary Spectrum through Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.; Tinetti, G.; Deroo, P.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J.

    2013-03-01

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a "blind" analysis of the data is based on the concept of independent component analysis. We present the detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals. By assuming no prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between the higher degree of objectivity of the non-parametric method and the smaller standard errors of parametric detrending. Results are discussed in the light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.

  7. Knowledge-guided gene ranking by coordinative component analysis.

    PubMed

    Wang, Chen; Xuan, Jianhua; Li, Huai; Wang, Yue; Zhan, Ming; Hoffman, Eric P; Clarke, Robert

    2010-03-30

    In cancer, gene networks and pathways often exhibit dynamic behavior, particularly during the process of carcinogenesis. Thus, it is important to prioritize those genes that are strongly associated with the functionality of a network. Traditional statistical methods are often unable to identify biologically relevant member genes, motivating researchers to incorporate biological knowledge into gene-ranking methods. However, current integration strategies are often heuristic and fail to fully incorporate the true interplay between biological knowledge and gene expression data. To improve knowledge-guided gene ranking, we propose a novel method called coordinative component analysis (COCA) in this paper. COCA explicitly captures those genes within a specific biological context that are likely to be expressed in a coordinative manner. Formulated as an optimization problem that maximizes the coordinative effort, COCA is designed to first extract the coordinative components based on partial guidance from knowledge genes and then rank the genes according to their participation strengths. An embedded bootstrapping procedure is implemented to improve the statistical robustness of the solutions. COCA was initially tested on simulated data and then on published gene expression microarray data to demonstrate its improved performance compared with traditional statistical methods. Finally, the COCA approach has been applied to stem cell data to identify biologically relevant genes in signaling pathways. As a result, the COCA approach uncovers novel pathway members that may shed light on pathway deregulation in cancers. We have developed a new integrative strategy to combine biological knowledge and microarray data for gene ranking. The method uses knowledge genes as guidance to first extract coordinative components and then ranks the genes according to their contribution to a network or pathway. The experimental results show that such a knowledge-guided strategy

  8. Principal Component Analysis and Cluster Analysis in Profile of Electrical System

    NASA Astrophysics Data System (ADS)

    Iswan; Garniwa, I.

    2017-03-01

    This paper presents an approach to profiling an electrical system that combines two algorithms, principal component analysis (PCA) and cluster analysis, based on data for gross regional domestic product and electric power and energy use. The profile is constructed to show the condition of the region's electrical system and is intended to inform future spatial-development policy for the electrical system. The paper considers 24 regions in South Sulawesi province as profile center points and uses PCA to assess the regional profiles for development. Cluster analysis is then used to group these regions into a few clusters according to the new variables produced by PCA. The resulting general plan of the electrical system of South Sulawesi province can support policy making for electrical system development. Future research could add further variables to the existing set.

  9. An Analysis of Costs of Computer Based Training Hardware and Courseware Development for the Model Training Program for Reserve Component Units

    DTIC Science & Technology

    1986-08-01

    courseware development and delivery for medium-scale computer-based training efforts. The system will support up to forty MicroTICCIT workstations, each of...optimize courseware development and delivery for large-scale computer-based training efforts. Each configuration will support up to sixty-four...audio messages in coordination with visual presentations. Audio can be used to supplement computer-generated as well as video displays

  10. Principal component analysis of International Ultraviolet Explorer galaxy spectra

    NASA Astrophysics Data System (ADS)

    Formiggini, Liliana; Brosch, Noah

    2004-05-01

    We analyse the UV spectral energy distributions of a sample of normal galaxies listed in the International Ultraviolet Explorer (IUE) Newly Extracted Spectra (INES) Guide No. 2 - Normal Galaxies using a principal component analysis. The sample consists of the IUE short-wavelength (SW) spectra of the central regions of 118 galaxies, where the IUE aperture included more than 1 per cent of the galaxy size. The principal components are associated with the main components observed in the ultraviolet (UV) spectra of galaxies. The first component, accounting for the largest source of diversity, may be associated with the UV continuum emission. The second component represents the UV contribution of an underlying evolved stellar population. The third component is sensitive to the amount of activity in the central regions of galaxies and measures the strength of star-formation events. In all the samples analysed here, the principal component representative of star-forming activity accounts for a significant percentage of the variance. The fractional contributions to the spectral energy distribution (SED) by the evolved stars and by the young population are similar. Projecting the SEDs onto their eigenspectra, we find that the coefficients of the principal components neither reveal any internal correlation nor correlate with the optical morphological types. In a subsample of 43 galaxies, consisting almost entirely of compact and BCD galaxies, the third principal component defines a sequence related to the degree of starburst activity of the galaxy.

  11. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    SciTech Connect

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-07-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis begins by ranking component importance using quantitative PSA information. The importance analysis of IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information available for the IST components. The PSA scope for the importance analysis includes not only the Level 1 and 2 internal PSA but also the Level 1 external and shutdown/low-power operation PSA. The results for valves show that 167 (26.55%) of the 629 IST valves are high safety-significant components (HSSCs) and 462 (73.45%) are low safety-significant components (LSSCs). Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  12. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    ERIC Educational Resources Information Center

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to identify the components of the internal control-based administrative system in secondary schools and to perform a confirmatory factor analysis (CFA) to confirm the goodness of fit between the empirical data and the component model resulting from the CFA. The study consisted of three steps: 1) a study of principles, ideas, and theories…

  13. A principal component analysis of transmission spectra of wine distillates

    NASA Astrophysics Data System (ADS)

    Rogovaya, M. V.; Sinitsyn, G. V.; Khodasevich, M. A.

    2014-11-01

    A chemometric method for projecting multidimensional data into a space of low dimension, the principal component method, has been applied to the transmission spectra of vintage Moldovan wine distillates. A sample of 42 distillates aged from 4 to 7 years from six producers has been used to show the possibility of identifying a producer in a two-dimensional space of principal components describing 94.5% of the data-matrix dispersion. Analysis of the loadings of the first two principal components has shown that, in order to measure the optical characteristics of the samples under study using only two wavelengths, it is necessary to select 380 and 540 nm, instead of the standard 420 and 520 nm, to describe the variability of the distillates by one principal component, or 370 and 520 nm to describe the variability by two principal components.
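
    The wavelength-selection step can be sketched by inspecting the loadings of the first two principal components and picking the wavelengths with the largest absolute loadings; the synthetic spectra below merely stand in for the measured distillate spectra.

```python
# Sketch: choosing informative wavelengths from PCA loadings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
wavelengths = np.arange(300, 700, 2)          # nm
n_samples = 42
base = np.exp(-0.5 * ((wavelengths - 450) / 80) ** 2)
spectra = np.vstack([
    base + rng.normal(0, 0.05) * np.exp(-0.5 * ((wavelengths - 380) / 30) ** 2)
         + rng.normal(0, 0.05) * np.exp(-0.5 * ((wavelengths - 540) / 40) ** 2)
         + rng.normal(0, 0.005, wavelengths.size)
    for _ in range(n_samples)
])

pca = PCA(n_components=2).fit(spectra)
for i, load in enumerate(pca.components_, 1):
    wl_star = wavelengths[np.argmax(np.abs(load))]
    print(f"PC{i}: peak |loading| at {wl_star} nm "
          f"({pca.explained_variance_ratio_[i-1]:.1%} of variance)")
```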

  14. Thermal hydraulic analysis of the TPX plasma facing components

    SciTech Connect

    Baxi, C.B.; Reis, E.E.; Redler, K.M.; Chin, E.E.; Boonstra, R.H.; Schaubel, K.M.; Anderson, P.M.; Hoffman, E.H.

    1995-12-31

    The purpose of the Tokamak Physics Experiment (TPX) is to develop and demonstrate steady-state tokamak operating modes that can be extrapolated to reactor conditions. TPX will have a double-null divertor with an option to operate in a single-null mode. The maximum input power will be 45 MW and the pulse length will be 1,000 s. The major and minor radii will be 2.25 m and 0.5 m, respectively. The material of the plasma facing components (PFCs) will be carbon fiber composite (CFC). Cooling of the PFCs will be provided by water at an inlet pressure of 2 MPa and an inlet temperature of 50 C. The heat flux on the PFCs will range from less than 0.2 MW/m² on line-of-sight shields to 7.5 MW/m² on divertor surfaces. The maximum allowable temperature is 1,400 C on the divertor surface and 600 C on all other PFCs. The attachment method, the type of CFC, the coolant flow velocity, and the type of coolant channel are chosen based on the surface heat flux. In areas of highest heat flux, heat transfer augmentation will be used to obtain a safety margin of at least 2 on the critical heat flux. An extensive thermal-flow analysis has been performed to calculate the temperatures and pressure drops in the PFCs. A number of R&D programs are also in progress to verify the analysis and to obtain additional data where required. The total coolant flow rate requirement is estimated to be about 50 m³/min (12,000 gpm) and the maximum pressure drop is estimated to be less than 1 MPa.

  15. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health-system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as inpatients, before they were sent home, or before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming the set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
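
    A sketch of the decomposition on simulated throughput data is given below, comparing ICA with PCA; the two latent "load" sources and the five-measure mixing are assumptions for illustration only.

```python
# Sketch: ICA vs PCA on simulated correlated throughput measures.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(8)
n_hosp = 3086
s1 = rng.exponential(1.0, n_hosp)             # e.g. admission pressure (hypothetical)
s2 = rng.uniform(0, 1, n_hosp)                # e.g. staffing level (hypothetical)
S = np.column_stack([s1, s2])
M = rng.normal(size=(2, 5))                   # mixing into 5 throughput measures
X = S @ M + rng.normal(0, 0.05, (n_hosp, 5))

ica_src = FastICA(n_components=2, random_state=0).fit_transform(X)
pca_src = PCA(n_components=2).fit_transform(X)
# ICA recovers the non-Gaussian sources up to order/scale; PCA only
# decorrelates, so its components generally mix the two latent sources.
print("ICA vs true source |corr|:",
      np.abs(np.corrcoef(ica_src.T, S.T))[:2, 2:].round(2))
```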

  16. Using Independent Component Analysis to Separate Signals in Climate Data

    SciTech Connect

    Fodor, I K; Kamath, C

    2003-01-28

    Global temperature series have contributions from different sources, such as volcanic eruptions and El Niño Southern Oscillation variations. We investigate independent component analysis as a technique to separate unrelated sources present in such series. We first use artificial data, with known independent components, to study the conditions under which ICA can separate the individual sources. We then illustrate the method with climate data from the National Centers for Environmental Prediction.
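
    The artificial-data experiment can be mimicked as follows: two known sources, a quasi-periodic "ENSO-like" oscillation and impulsive "volcanic" cooling spikes, are mixed into several temperature-like series and FastICA's recovery is checked. All signal shapes and mixing weights are invented for illustration.

```python
# Sketch: FastICA recovering known artificial climate-like sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
t = np.arange(1200)                           # months
enso = np.sin(2 * np.pi * t / 48) + 0.3 * np.sin(2 * np.pi * t / 30)
volcanic = np.zeros(t.size)
volcanic[rng.choice(t.size, 6, replace=False)] = -3.0   # eruption cooling spikes
S = np.column_stack([enso, volcanic])

A = np.array([[1.0, 0.6], [0.7, 1.0], [0.9, 0.2]])      # 3 observed series
X = S @ A.T + rng.normal(0, 0.1, (t.size, 3))

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
# Recovery is up to permutation, sign, and scale, so check correlations.
corr = np.abs(np.corrcoef(S_hat.T, S.T))[:2, 2:]
print("recovery |correlation| matrix:\n", corr.round(2))
```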

  17. Exploration of shape variation using localized components analysis.

    PubMed

    Alcantara, Dan A; Carmichael, Owen; Harcourt-Smith, Will; Sterner, Kirstin; Frost, Stephen R; Dutton, Rebecca; Thompson, Paul; Delson, Eric; Amenta, Nina

    2009-08-01

    Localized Components Analysis (LoCA) is a new method for describing surface shape variation in an ensemble of objects using a linear subspace of spatially localized shape components. In contrast to earlier methods, LoCA optimizes explicitly for localized components and allows a flexible trade-off between localized and concise representations, and the formulation of locality is flexible enough to incorporate properties such as symmetry. This paper demonstrates that LoCA can provide intuitive presentations of shape differences associated with sex, disease state, and species in a broad range of biomedical specimens, including human brain regions and monkey crania.

  18. Computer compensation for NMR quantitative analysis of trace components

    SciTech Connect

    Nakayama, T.; Fujiwara, Y.

    1981-07-22

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. The program uses the Lorentzian curve as the theoretical line shape for NMR spectra, with the Lorentzian coefficients determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. The program was applied to determining the abundance of 13C and the saponification degree of PVA.
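
    A sketch of the core computation, assuming a two-peak spectrum with one trace component: a moving average smooths the random noise and a least-squares fit determines the Lorentzian coefficients. Peak positions, widths, and the use of scipy's curve_fit are assumptions standing in for the original program.

```python
# Sketch: least-squares Lorentzian fit of overlapping NMR-like peaks.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def two_peaks(x, a1, x1, g1, a2, x2, g2):
    return lorentzian(x, a1, x1, g1) + lorentzian(x, a2, x2, g2)

rng = np.random.default_rng(10)
x = np.linspace(0, 10, 500)                   # chemical-shift axis (ppm, illustrative)
y = two_peaks(x, 1.0, 4.0, 0.3, 0.05, 6.0, 0.4)   # major peak + trace component
y += rng.normal(0, 0.005, x.size)

# A moving average smooths the random noise before fitting.
kernel = np.ones(5) / 5
y_smooth = np.convolve(y, kernel, mode="same")

p0 = [1, 4, 0.5, 0.1, 6, 0.5]                 # rough initial guesses
popt, _ = curve_fit(two_peaks, x, y_smooth, p0=p0)
print("fitted trace-peak amplitude:", round(popt[3], 4))
```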

  19. Parallel PDE-Based Simulations Using the Common Component Architecture

    SciTech Connect

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-03-05

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.

  20. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  1. Critical Components of Effective School-Based Feeding Improvement Programs

    ERIC Educational Resources Information Center

    Bailey, Rita L.; Angell, Maureen E.

    2004-01-01

    This article identifies critical components of effective school-based feeding improvement programs for students with feeding problems. A distinction is made between typical school-based feeding management and feeding improvement programs, where feeding, independent functioning, and mealtime behaviors are the focus of therapeutic strategies.…

  2. Fast and automatic algorithm for optic disc extraction in retinal images using principle-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and in human identification for biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for adverse changes in illumination and to enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the confounding factors and pave the way for exact extraction of the ON region. We then detect the ON region using morphology operators based on geodesic transformations, applying a suitable adaptive correction function to the reconstructed image's curvelet transform coefficients together with a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on the available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm achieves accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.

  3. Bonding and Integration Technologies for Silicon Carbide Based Injector Components

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.; Singh, Mrityunjay

    2008-01-01

    Advanced ceramic bonding and integration technologies play a critical role in the fabrication and application of silicon carbide based components for a number of aerospace and ground-based applications. One such application is a lean direct injector for a turbine engine to achieve low NOx emissions. Ceramic-to-ceramic diffusion bonding and ceramic-to-metal brazing technologies are being developed for this injector application. For the diffusion bonding, titanium interlayers (PVD coatings and foils) were used to aid the joining of silicon carbide (SiC) substrates. The influence of variables such as surface finish, interlayer thickness (10, 20, and 50 microns), processing time and temperature, and cooling rate was investigated. Microprobe analysis was used to identify the phases in the bonded region. For bonds that were not fully reacted, an intermediate phase, Ti5Si3Cx, formed whose thermal expansion is incompatible with the substrates and caused thermal stresses and cracking during the processing cool-down. Thinner titanium interlayers and/or longer processing times resulted in stable, compatible phases that did not contribute to microcracking and yielded an optimized microstructure. Tensile tests on the joined materials gave strengths of 13-28 MPa depending on the SiC substrate material. Non-destructive evaluation using ultrasonic immersion showed well-formed bonds. For the technology of brazing Kovar fuel tubes to silicon carbide, preliminary development of the joining approach has begun. Various technical issues and requirements for the injector application are addressed.

  4. Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools

    SciTech Connect

    Barrand, Guy

    2002-08-20

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualization), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organization, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and OO design techniques. An initial domain analysis has identified several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimizing the re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  5. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots

    NASA Astrophysics Data System (ADS)

    Pereira, Jorge Costa; Azevedo, Julio Cesar R.; Knapik, Heloise G.; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  6. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots.

    PubMed

    Pereira, Jorge Costa; Azevedo, Julio Cesar R; Knapik, Heloise G; Burrows, Hugh Douglas

    2016-08-05

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  7. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
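
    A bare-bones sketch of Fourier-basis FPCA is shown below: each city's weekly series is projected onto a small Fourier basis (the smoothing step) and PCA is then run on the basis coefficients. The simulated weekend-peaked series and the three-function basis are assumptions; dedicated packages such as scikit-fda would normally provide this machinery.

```python
# Sketch: functional PCA via a Fourier basis on weekly city series.
import numpy as np

rng = np.random.default_rng(11)
days = np.arange(7)
n_cities = 42
weekend = np.exp(-0.5 * ((days - 5.5) / 0.8) ** 2)      # weekend MDMA peak (assumed)
series = np.vstack([rng.uniform(0.2, 2.0) * weekend
                    + rng.normal(0, 0.05, 7) for _ in range(n_cities)])

# Fourier design matrix: constant term plus one harmonic over the week
B = np.column_stack([np.ones(7),
                     np.sin(2 * np.pi * days / 7),
                     np.cos(2 * np.pi * days / 7)])
coef, *_ = np.linalg.lstsq(B, series.T, rcond=None)     # (3, n_cities) coefficients

# Ordinary PCA on the basis coefficients yields the functional PCs.
C = coef.T - coef.T.mean(axis=0)
evals, evecs = np.linalg.eigh(C.T @ C / n_cities)
fpc_curves = B @ evecs[:, ::-1]               # functional PCs back on the day grid
ratios = evals[::-1] / evals.sum()
print("variance explained by FPC1-3:", ratios.round(3))
```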

  8. Spatial independent component analysis of functional brain optical imaging

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Pengcheng; Liu, Yadong; Luo, Weihua; Hu, Dewen; Luo, Qingming

    2003-12-01

    This paper introduces the algorithm and basic theory of Independent Component Analysis (ICA) and discusses how to choose the appropriate ICA model for the data according to the characteristics of the underlying signals to be estimated. Spatial ICA (SICA) is applied to the modeling and analysis of the experimental data when the signals and noise are spatially dependent. Data acquired from the intrinsic optical signals evoked by electrical stimulation of the rat sciatic nerve are analyzed by SICA. In the results, the activity-related component of the signals and its time course can be separated, and the heartbeat and respiration signals can also be separated.

  9. An online incremental orthogonal component analysis method for dimensionality reduction.

    PubMed

    Zhu, Tao; Xu, Ye; Shen, Furao; Zhao, Jinxi

    2017-01-01

    In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract desired orthogonal components (OCs) in an online environment. The OCs and the low-dimensional representations of the original data are obtained with only one pass through the entire dataset. Without solving a matrix eigenproblem or a matrix inversion problem, IOCA learns incrementally from a continuous data stream at low computational cost. By proposing an adaptive threshold policy, IOCA is able to automatically determine the dimension of the feature subspace, while the quality of the learned OCs is guaranteed. The analysis and experiments demonstrate that IOCA is simple, yet efficient and effective.
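
    A loose sketch of the one-pass idea is given below: a growing set of orthonormal components is maintained, and any sample whose residual (after projecting out the current components) exceeds a threshold contributes a new component. The fixed threshold here replaces the paper's adaptive policy, so this illustrates the flavor of the method, not IOCA itself.

```python
# Sketch: one-pass extraction of orthogonal components from a data stream.
import numpy as np

def incremental_ocs(stream, threshold=0.5):
    components = []                           # list of orthonormal vectors
    for x in stream:
        r = x.astype(float)
        for c in components:                  # project out the current subspace
            r -= (r @ c) * c
        if np.linalg.norm(r) > threshold:     # unexplained direction found
            components.append(r / np.linalg.norm(r))
    return np.array(components)

rng = np.random.default_rng(12)
basis = rng.normal(size=(3, 20))              # data live near a 3-dim subspace
data = rng.normal(size=(1000, 3)) @ basis + rng.normal(0, 0.01, (1000, 20))

ocs = incremental_ocs(data, threshold=1.0)
print("components found:", len(ocs))          # typically 3 for this stream
reduced = data @ ocs.T                        # low-dimensional representation
```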

  10. A comparison of independent component analysis algorithms and measures to discriminate between