Science.gov

Sample records for component analysis based

  1. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the Planck High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
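
    A rough illustration of the kurtosis-ranked separation described above: the sketch below runs scikit-learn's FastICA (deflation variant, with the "cube" nonlinearity corresponding to a kurtosis contrast) on three synthetic mock maps. The mixing matrix and component distributions are invented stand-ins, not the paper's Planck/NANTEN data.

```python
# Minimal sketch, assuming invented component distributions and mixing matrix.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pix = 10_000
cmb = rng.normal(size=n_pix)                  # near-Gaussian CMB stand-in
dust = rng.gamma(2.0, size=n_pix)             # skewed foreground stand-in
co = rng.exponential(size=n_pix) ** 2         # strongly non-Gaussian "CO"

A = np.array([[1.0, 0.8, 0.1],                # hypothetical mixing matrix,
              [1.0, 1.0, 0.0],                # rows ~ 100/143/217 GHz maps
              [1.0, 1.5, 0.9]])
maps = A @ np.vstack([cmb, dust, co])         # mock sky maps, shape (3, n_pix)

# "cube" is the kurtosis-based contrast; "deflation" extracts one IC at a time
ica = FastICA(n_components=3, algorithm="deflation", fun="cube", random_state=0)
sources = ica.fit_transform(maps.T).T         # estimated independent components

# rank recovered components by |excess kurtosis|; the CO-like one should lead
order = np.argsort(-np.abs(kurtosis(sources, axis=1)))
print("components ordered by non-Gaussianity:", order)
```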

  2. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the Planck High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  3. Quantitative interpretation of mineral hyperspectral images based on principal component analysis and independent component analysis methods.

    PubMed

    Jiang, Xiping; Jiang, Yu; Wu, Fang; Wu, Fenghuang

    2014-01-01

    Mineral hyperspectral images provide large amounts of high-dimensional data, and their interpretation is often complicated by mixed pixels. The quantitative interpretation of hyperspectral images is known to be extremely difficult when three types of information are unknown, namely, the number of pure pixels, the spectra of pure pixels, and the mixing matrix. The problem is made even more complex by the disturbance of noise. The key to interpreting abstract mineral component information, i.e., pixel unmixing and abundance inversion, is how to effectively reduce noise, dimension, and redundancy. A three-step procedure is developed in this study for quantitative interpretation of hyperspectral images. First, the principal component analysis (PCA) method is used to process the pixel spectrum matrix and keep the characteristic vectors with larger eigenvalues. This effectively reduces noise and redundancy, which facilitates the extraction of major component information. Second, the independent component analysis (ICA) method is used to identify and unmix the pixels based on the linear mixing model. Third, the pure-pixel spectra are normalized for abundance inversion, which gives the abundance of each pure pixel. In numerical experiments, both simulated and actual data were used to demonstrate the performance of our three-step procedure. With simulated data, the results of our procedure were compared with theoretical values. With actual data measured from core hyperspectral images, the results obtained through our algorithm were compared with those of similar software (Mineral Spectral Analysis 1.0, Nanjing Institute of Geology and Mineral Resources). The comparisons show that our method is effective and can provide a reference for quantitative interpretation of hyperspectral images.
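
    A minimal sketch of the three-step procedure (PCA for denoising/reduction, ICA for unmixing, normalization for abundances) under a linear mixing model; the synthetic endmembers, abundance draws, and noise level are assumptions.

```python
# Sketch only: synthetic endmembers and abundances stand in for real pixels.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
n_pixels, n_bands, n_end = 2_000, 64, 3
endmembers = np.abs(rng.normal(size=(n_end, n_bands)))        # pure spectra
abund = rng.dirichlet(np.ones(n_end), size=n_pixels)          # true abundances
X = abund @ endmembers + 0.01 * rng.normal(size=(n_pixels, n_bands))

# Step 1: PCA keeps high-eigenvalue directions (noise/redundancy reduction)
X_red = PCA(n_components=n_end).fit_transform(X)

# Step 2: ICA unmixes the retained components under the linear mixing model
S = FastICA(n_components=n_end, random_state=0).fit_transform(X_red)

# Step 3: normalize per-pixel loadings so they behave like abundances
S -= S.min(axis=0)                     # crude nonnegativity shift (illustrative)
abundances = S / S.sum(axis=1, keepdims=True)
print(abundances[:3].round(3))
```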

  4. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen, Yenwei; Han, Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filter based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
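
    The ICA-domain shrinkage idea can be sketched as follows; the ICA basis here is learned from noisy image patches, and the toy image, patch size, and threshold rule are assumptions, not the paper's settings.

```python
# Sketch of ICA-domain soft thresholding on a toy Poisson-noised image.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(2)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))     # toy 64x64 image
noisy = rng.poisson(clean * 50).astype(float) / 50.0     # Poisson-noised copy

patches = extract_patches_2d(noisy, (8, 8))
P = patches.reshape(len(patches), -1)
mean = P.mean(axis=0)

ica = FastICA(n_components=32, random_state=0, max_iter=500)
S = ica.fit_transform(P - mean)                          # to the ICA domain

t = 0.7 * S.std(axis=0)                                  # per-component threshold
S_shrunk = np.sign(S) * np.maximum(np.abs(S) - t, 0.0)   # soft thresholding

P_hat = ica.inverse_transform(S_shrunk) + mean           # back to image domain
denoised = reconstruct_from_patches_2d(P_hat.reshape(patches.shape), noisy.shape)
```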

  5. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    PubMed

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2016-12-22

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
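
    A rough sketch of the serial structure (linear PCA first, kernel PCA on the residual subspace, then Hotelling-style statistics on both feature sets); the component counts, kernel width, and toy nonlinearity are assumptions.

```python
# Serial PCA sketch: linear PCs, then kernel PCs on the linear residuals.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))
X[:, 3] = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)   # inject a nonlinear part

# Step 1: linear PCA splits data into a PC subspace and a residual subspace
pca = PCA(n_components=4).fit(X)
T_lin = pca.transform(X)                              # linear features
residual = X - pca.inverse_transform(T_lin)           # RS part of the data

# Step 2: kernel PCA extracts nonlinear features from the residual subspace
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(residual)
T_nonlin = kpca.transform(residual)

# Monitoring sketch: T(2)-style statistics on each feature set
t2_lin = np.sum(T_lin ** 2 / T_lin.var(axis=0), axis=1)
t2_nonlin = np.sum(T_nonlin ** 2 / T_nonlin.var(axis=0), axis=1)
```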

  6. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
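
    The low-rank-plus-sparse decomposition at the heart of this method can be sketched with a simple alternating-shrinkage variant of principal component pursuit (not the paper's exact solver); the regularization weights and iteration count are assumptions.

```python
# Illustrative robust PCA: decompose M into low-rank L plus sparse error S.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(M, n_iter=100):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # common heuristic weight
    mu = 0.25 * np.abs(M).mean()            # shrinkage level (assumption)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(sig, mu)) @ Vt        # singular-value thresholding
        S = soft(M - L, lam * mu)           # elementwise sparse-error update
    return L, S

rng = np.random.default_rng(4)
base = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))   # rank-5 "images"
occlusion = (rng.random((100, 80)) < 0.05) * 5.0              # sparse corruption
L, S = rpca(base + occlusion)
print(np.linalg.svd(L, compute_uv=False)[:8].round(2))        # few dominant values
```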

  7. Robust principal component analysis based on maximum correntropy criterion.

    PubMed

    He, Ran; Hu, Bao-Gang; Zheng, Wei-Shi; Kong, Xiang-Wei

    2011-06-01

    Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur.
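
    The half-quadratic mechanism can be sketched as an iteratively reweighted eigen-decomposition in which a Gaussian (correntropy) kernel down-weights outliers; the bandwidth heuristic and iteration count are assumptions.

```python
# Sketch of MCC-style robust PCA via half-quadratic reweighting.
import numpy as np

def mcc_pca(X, n_components=1, n_iter=20):
    w = np.ones(len(X))                          # per-sample weights
    for _ in range(n_iter):
        mu = (w[:, None] * X).sum(0) / w.sum()   # weighted mean (no zero-mean assumption)
        Xc = X - mu
        C = (w[:, None] * Xc).T @ Xc / w.sum()   # robust covariance estimate
        vals, vecs = np.linalg.eigh(C)
        V = vecs[:, -n_components:]              # principal eigenvectors
        resid = np.linalg.norm(Xc - (Xc @ V) @ V.T, axis=1) ** 2
        sigma2 = np.median(resid) + 1e-12        # heuristic kernel bandwidth
        w = np.exp(-resid / (2 * sigma2))        # outliers get small weights
    return mu, V

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 2)) @ np.array([[3.0, 0.0], [1.0, 0.3]])
X[:15] += 25.0                                   # gross outliers
mean_est, pcs = mcc_pca(X, n_components=1)
```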

  8. Principal component analysis based methodology to distinguish protein SERS spectra

    NASA Astrophysics Data System (ADS)

    Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.

    2011-05-01

    Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility in the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. The combined use of SERS measurements and PCA is thus effective in categorizing proteins on the basis of secondary structure.

  9. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial features, and it resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
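
    A minimal sketch of the wavelet-then-PCA feature chain, assuming PyWavelets (pywt) and scikit-learn; the random "faces" and nearest-neighbour matching step are stand-ins for a real gallery and classifier.

```python
# Wavelet decomposition + PCA feature sketch on stand-in face images.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
faces = rng.random((40, 64, 64))                  # stand-in face images

# keep the low-frequency approximation subband of a 2-D Haar DWT
approx = np.array([pywt.dwt2(img, "haar")[0] for img in faces])  # (40, 32, 32)
X = approx.reshape(len(approx), -1)

pca = PCA(n_components=10).fit(X)                 # eigenfaces on wavelet features
features = pca.transform(X)

probe = features[0]                               # recognition: nearest neighbour
match = np.argmin(np.linalg.norm(features[1:] - probe, axis=1)) + 1
print("best match for face 0:", match)
```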

  10. [A Composition Analysis Method of Mixed Pigments Based on Spectrum Expression and Independent Component Analysis].

    PubMed

    Wang, Gong-ming; Liu, Zhi-yong

    2015-06-01

    Reflectance spectrometry is a common method for the composition analysis of mixed pigments. In this method, similarity is used to determine the types of basic pigments that constitute the mixed pigments, but the result may be inaccurate because it is easily influenced by the variety of basic pigments. In this study, a composition analysis method for mixed pigments based on spectrum expression and independent component analysis is proposed, with which the composition of mixed pigments can be calculated accurately. First, the spectral information of the mixed pigments is obtained with a spectrometer and expressed as a discrete signal. After that, the spectral information of the basic pigments is deduced with independent component analysis. Then, the types of basic pigments are determined by calculating the spectral similarity between the basic pigments and known pigments. Finally, the ratios of the basic pigments are obtained by solving the Kubelka-Munk equation system. Simulated spectrum data from the Munsell color card were used to validate this method. The compositions of mixed pigments from three basic pigments were determined under both normal and disturbed conditions, and the compositions of mixtures of several pigments within a set of eight basic pigments were deduced successfully. The curves of the separated pigment spectra are very similar to the curves of the original pigment spectra: the average similarity is 97.72%, and the maximum reaches 99.95%. The calculated ratios of the basic pigments are close to the original ones. This method is thus suitable for composition analysis of mixed pigments.
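
    The final ratio step can be sketched as a nonnegative least-squares solve of the linearized Kubelka-Munk mixing system, assuming the basic-pigment K/S spectra have already been identified; the toy Gaussian K/S curves below are assumptions.

```python
# Ratio step only: under Kubelka-Munk, the mixture K/S is approximately a
# concentration-weighted sum of the basic-pigment K/S spectra.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 700, 150)                         # wavelength axis, nm
ks = lambda c, w: np.exp(-((wl - c) / w) ** 2)          # toy K/S curves
basis = np.vstack([ks(450, 40), ks(550, 35), ks(620, 30)])

true_ratio = np.array([0.5, 0.3, 0.2])
mixture = true_ratio @ basis                            # mixture K/S spectrum

ratios, _ = nnls(basis.T, mixture)                      # nonnegative solve
print((ratios / ratios.sum()).round(3))                 # ~ [0.5, 0.3, 0.2]
```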

  11. Formal Methods for Quality of Service Analysis in Component-Based Distributed Computing

    DTIC Science & Technology

    2003-12-01

    Component-Based Software Architecture is a promising solution for distributed computing. To develop high quality software, analysis of non-functional...based distributed computing is proposed and represented formally using Two-Level Grammar (TLG), an object-oriented formal specification language. TLG...

  12. Identification of pure component spectra by independent component analysis in glucose prediction based on mid-infrared spectroscopy.

    PubMed

    Hahn, Sangjoon; Yoon, Gilwon

    2006-11-10

    We present a method for glucose prediction from mid-IR spectra by independent component analysis (ICA). This method is able to identify pure, or individual, absorption spectra of constituent components from the mixture spectra without a priori knowledge of the mixture. This method was tested with a two-component system consisting of an aqueous solution of both glucose and sucrose, which exhibit distinct but closely overlapped spectra. ICA combined with principal component analysis was able to identify a spectrum for each component, the correct number of components, and the concentrations of the components in the mixture. This method does not need a calibration process and is advantageous in noninvasive glucose monitoring since expensive and time-consuming clinical tests for data calibration are not required.
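
    A toy version of the pure-spectrum recovery: synthetic overlapping glucose- and sucrose-like bands are mixed at random ratios, and FastICA (with its built-in PCA whitening) recovers the pure spectra up to sign and scale. The band shapes and mixing ratios are assumptions, not measured spectra.

```python
# Pure component spectra identification sketch for a two-component system.
import numpy as np
from sklearn.decomposition import FastICA

wn = np.linspace(900, 1500, 300)                       # mid-IR wavenumber axis
band = lambda c, w: np.exp(-((wn - c) / w) ** 2)
glucose = band(1035, 20) + 0.6 * band(1080, 15)        # closely overlapped
sucrose = band(1055, 18) + 0.5 * band(995, 12)         # pure spectra

C = np.random.default_rng(7).random((40, 2))           # 40 mixtures, 2 analytes
mixtures = C @ np.vstack([glucose, sucrose])           # Beer-Lambert-style mixing

ica = FastICA(n_components=2, random_state=0)          # PCA whitening built in
ica.fit(mixtures)
pure_est = ica.mixing_.T                               # rows ~ pure spectra
conc_est = ica.transform(mixtures)                     # loadings ~ concentrations
# note: ICA recovers spectra only up to sign and scale
```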

  13. Reduction of a collisional-radiative mechanism for argon plasma based on principal component analysis

    SciTech Connect

    Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.

    2015-06-15

    This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic-specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set onto a reduced basis made up of its principal components. Thus, the flow governing equations are solved only for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the MG-PCA technique enables a substantial reduction of the computational time.

  14. Independent component feature-based human activity recognition via Linear Discriminant Analysis and Hidden Markov Model.

    PubMed

    Uddin, Md; Lee, J J; Kim, T S

    2008-01-01

    In proactive computing, human activity recognition from image sequences is an active research area. This paper presents a novel approach to human activity recognition based on Linear Discriminant Analysis (LDA) of Independent Component (IC) features extracted from shape information. With the extracted features, a Hidden Markov Model (HMM) is applied for training and recognition. The recognition performance using LDA of IC features has been compared to other approaches, including Principal Component Analysis (PCA), LDA of PCs, and ICA. The preliminary results show much improved recognition rates with our proposed method.
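
    A sketch of the feature chain only (IC features refined by LDA); the HMM training stage is omitted, and the "shape features" are random stand-ins with injected class structure.

```python
# ICA -> LDA feature extraction sketch for activity classes.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 50))                     # stand-in shape vectors
y = rng.integers(0, 3, size=300)                   # three activity classes
X[y == 1] += 0.5
X[y == 2] -= 0.5                                   # inject class structure

ics = FastICA(n_components=10, random_state=0).fit_transform(X)    # IC features
lda_feats = LinearDiscriminantAnalysis(n_components=2).fit_transform(ics, y)
# lda_feats would then be quantized and fed to one HMM per activity class
```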

  15. Dependent component analysis based approach to robust demarcation of skin tumors

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2009-02-01

    A method for robust demarcation of basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of the red-green-blue (RGB) fluorescent image of the BCC. It exploits spectral diversity between the BCC and the surrounding tissue. DCA represents an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. Robustness to intensity fluctuation is due to the scale-invariance property of DCA algorithms. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, and ICA, we experimentally demonstrate good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image is varied by almost two orders of magnitude.

  16. Batch process monitoring based on multiple-phase online sorting principal component analysis.

    PubMed

    Lv, Zhaomin; Yan, Xuefeng; Jiang, Qingchao

    2016-09-01

    Existing phase-based batch or fed-batch process monitoring strategies generally face two problems: (1) the phase number is difficult to determine, and (2) the data have uneven lengths. In this study, a multiple-phase online sorting principal component analysis modeling strategy (MPOSPCA) is proposed to monitor multiple-phase batch processes online. Based on all batches of off-line normal data, a new multiple-phase partition algorithm is proposed, where k-means and a defined average Euclidean radius are employed to determine the multiple-phase data set and phase number. Principal component analysis is then applied to build the model in each phase, and all the components are retained. In online monitoring, the Euclidean distance is used to select the monitoring model. All the components undergo online sorting through a parameter defined by Bayesian inference (BI), and the first several components are retained to calculate the T(2) statistics. Finally, the respective probability indices of the T(2) statistics are obtained using BI as the moving average strategy. The feasibility and effectiveness of MPOSPCA are demonstrated through a simple numerical example and the fed-batch penicillin fermentation process.
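
    A sketch of the phase-partition-plus-per-phase-PCA idea: k-means groups the off-line data into phases, a PCA model is fitted per phase, and the nearest phase model supplies a T(2) statistic online. The phase count, nearest-center model selection, and the omission of the BI-based component sorting are simplifications.

```python
# Multiple-phase PCA monitoring sketch on synthetic two-phase batch data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
phase1 = rng.normal(0.0, 1.0, size=(300, 6))
phase2 = rng.normal(4.0, 0.5, size=(300, 6))
X = np.vstack([phase1, phase2])                    # unfolded off-line batch data

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
models = {k: PCA().fit(X[km.labels_ == k]) for k in range(2)}

def t2(sample):
    # online step: pick the phase model with the nearest cluster center
    k = int(np.argmin(np.linalg.norm(km.cluster_centers_ - sample, axis=1)))
    scores = models[k].transform(sample[None, :])[0]
    return np.sum(scores ** 2 / models[k].explained_variance_)

print(t2(X[10]), t2(X[450]))                       # T(2) for samples of each phase
```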

  17. Seislet-based morphological component analysis using scale-dependent exponential shrinkage

    NASA Astrophysics Data System (ADS)

    Yang, Pengliang; Fomel, Sergey

    2015-07-01

    Morphological component analysis (MCA) is a powerful tool used in image processing to separate different geometrical components (cartoons and textures, curves and points, etc.). MCA is based on the observation that many complex signals may not be sparsely represented using only one dictionary/transform, but can have a sparse representation by combining several over-complete dictionaries/transforms. In this paper we propose seislet-based MCA for seismic data processing. The MCA algorithm is reformulated in the shaping-regularization framework. Successful seislet-based MCA depends on reliable slope estimation of seismic events, which is done by plane-wave destruction (PWD) filters. An exponential shrinkage operator unifies many existing thresholding operators and is adopted in scale-dependent shaping regularization to promote sparsity. Numerical examples demonstrate the superior performance of the proposed exponential shrinkage operator and the potential of seislet-based MCA in application to trace interpolation and multiple removal.

  18. Improved gene prediction by principal component analysis based autoregressive Yule-Walker method.

    PubMed

    Roy, Manidipa; Barman, Soma

    2016-01-10

    Spectral analysis using Fourier techniques is popular in gene prediction because of its simplicity. Model-based autoregressive (AR) spectral estimation gives better resolution even for small DNA segments, but the selection of an appropriate model order is a critical issue. In this article a technique is proposed in which the Yule-Walker autoregressive (YW-AR) process is combined with principal component analysis (PCA) for dimensionality reduction. The spectral peaks of the DNA signal are used to detect protein-coding regions based on the 1/3 frequency component. Optimal model order selection is no longer critical, as noise is removed by PCA prior to power spectral density (PSD) estimation. The eigenvalue ratio is used to find the threshold between signal and noise subspaces for data reduction. The superiority of the proposed method over the fast Fourier transform (FFT) method and the AR method combined with the wavelet packet transform (WPT) is established with the help of receiver operating characteristics (ROC) and discrimination measure (DM), respectively.
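
    The Yule-Walker spectral step can be sketched directly from the autocorrelation sequence; the numeric DNA mapping and AR order are assumptions, and the PCA denoising that would precede this step in the proposed method is omitted for brevity.

```python
# Yule-Walker AR spectrum sketch: a period-3 signal peaks near f = 1/3,
# the frequency component characteristic of protein-coding DNA.
import numpy as np
from scipy.linalg import toeplitz

def yule_walker_psd(x, order, n_freq=512):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])   # YW equations
    sigma2 = r[0] - a @ r[1:order + 1]                         # innovation variance
    freqs = np.linspace(0, 0.5, n_freq)
    E = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return freqs, sigma2 / np.abs(1 - E @ a) ** 2              # AR power spectrum

# period-3 stand-in for a numerically mapped coding sequence (assumption)
seq = np.tile([0.0, 1.0, 2.0], 200)
seq += 0.2 * np.random.default_rng(10).normal(size=seq.size)
freqs, psd = yule_walker_psd(seq, order=10)
print("spectral peak near f = 1/3:", freqs[np.argmax(psd)])
```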

  19. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    PubMed

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that accounted for 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework underlying the Bobath approach to treatment.

  20. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  1. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. It includes three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their trends and transformation coefficients. The analyses not only explain the underlying driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis was also conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
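
    A sketch of the EEMD-ICA chain on a synthetic "price" series, assuming the PyEMD package (pip install EMD-signal) for EEMD; the regrouping of minor IMFs into a virtual IMF is simplified here to a plain variance cut and sum.

```python
# EEMD -> regroup -> ICA sketch on a synthetic trend-plus-cycles series.
import numpy as np
from PyEMD import EEMD
from sklearn.decomposition import FastICA

t = np.linspace(0, 10, 1_000)
price = (0.5 * t + np.sin(2 * np.pi * t) + 0.3 * np.sin(9 * np.pi * t)
         + 0.1 * np.random.default_rng(11).normal(size=t.size))

imfs = EEMD(trials=50).eemd(price, t)               # Step 1: decompose into IMFs

var = imfs.var(axis=1)                              # Step 2: regroup minor IMFs
keep = var >= 0.05 * var.max()
minor = imfs[~keep]
vimfs = np.vstack([imfs[keep], minor.sum(axis=0)]) if len(minor) else imfs[keep]

# Step 3: ICA turns the virtual IMFs into statistically independent components
ics = FastICA(n_components=len(vimfs), random_state=0).fit_transform(vimfs.T).T
```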

  2. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  3. Image-based pupil plane characterization via principal component analysis for EUVL tools

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-03-01

    We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the Semiconductor High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.

  4. Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study

    NASA Astrophysics Data System (ADS)

    Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas

    2009-02-01

    We aim to capture and apply 3-dimensional bone fragility features for fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigen images) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject, and used the fracture risk indices to discriminate between fracture patients and controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the AUC (area under the curve). When using the first group for training and the second for testing, the AUC was 0.880, compared to conventional fracture risk estimation methods based on bone densitometry, which had AUC values ranging between 0.782 and 0.871. When using the second group for training, the AUC was 0.839, compared to densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from the hip QCT atlas are associated with hip fracture. Such features may provide new quantitative measures of interest for osteoporosis assessment.

  5. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter.

  6. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
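
    One way to sketch IPCA-based stream screening with scikit-learn's IncrementalPCA: score each incoming mini-batch against the current model by reconstruction error, then update the model. The 4-sigma threshold rule is an assumption, not one of the paper's methods.

```python
# Incremental PCA outlier-flagging sketch for a synthetic sensor stream.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(12)
stream = rng.normal(size=(2_000, 8)) @ rng.normal(size=(8, 8))
stream[1500] += 40.0                                  # injected outlier

ipca = IncrementalPCA(n_components=3)
flags = []
for start in range(0, len(stream), 200):
    batch = stream[start:start + 200]
    if start > 0:                                     # score against current model
        err = ((batch - ipca.inverse_transform(ipca.transform(batch))) ** 2).sum(1)
        flags.extend(np.where(err > err.mean() + 4 * err.std())[0] + start)
    ipca.partial_fit(batch)                           # then update incrementally

print("flagged indices:", flags)                      # should include 1500
```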

  7. Independent component analysis-based source-level hyperlink analysis for two-person neuroscience studies

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-02-01

    Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments as well as simultaneously recording brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity or hyperlink of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS)-hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks using sensor data directly while ignoring the fact that the sensor-level signals contain confounding noise, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, on the basis of independent component analysis (ICA), a source-level analysis framework is proposed to investigate the hyperlinks in a fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared in simulative datasets, and increased sensitivity and specificity of hyperlink analysis by our proposed method were demonstrated in both simulative and real two-person experiments.

  8. Aberration measurement based on principal component analysis of aerial images of optimized marks

    NASA Astrophysics Data System (ADS)

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo

    2014-10-01

    We propose an aberration measurement technique based on principal component analysis of aerial images of optimized marks (AMAI-OM). Zernike aberrations are retrieved using a linear relationship between the aerial image and Zernike coefficients. The linear relationship is composed of the principal components (PCs) and regression matrix. A centering process is introduced to compensate position offsets of the measured aerial image. A new test mark is designed in order to improve the centering accuracy and theoretical accuracy of aberration measurement together. The new test marks are composed of three spaces with different widths, and their parameters are optimized by using an accuracy evaluation function. The offsets of the measured aerial image are compensated in the centering process and the adjusted PC coefficients are obtained. Then the Zernike coefficients are calculated according to these PC coefficients using a least square method. The simulations using the lithography simulators PROLITH and Dr.LiTHO validate the accuracy of our method. Compared with the previous aberration measurement technique based on principal component analysis of aerial image (AMAI-PCA), the measurement accuracy of Zernike aberrations under the real measurement condition of the aerial image is improved by about 50%.

  9. Inverting geodetic time series with a principal component analysis-based inversion method

    NASA Astrophysics Data System (ADS)

    Kositsky, A. P.; Avouac, J.-P.

    2010-03-01

    The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through an inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series, showing that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 magnitude 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
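
    A toy version of the PCAIM idea: take the SVD of the station-by-epoch displacement matrix, invert each retained principal spatial vector for a slip pattern, and recombine with the temporal coefficients. The Green's-function matrix and truncation rank below are made-up assumptions, not a real fault geometry.

```python
# PCAIM-style inversion sketch on synthetic geodetic time series.
import numpy as np

rng = np.random.default_rng(13)
n_stations, n_epochs, n_patches = 30, 200, 12
G = rng.normal(size=(n_stations, n_patches))            # elastic Green's functions
slip_true = np.outer(rng.random(n_patches), np.linspace(0, 1, n_epochs) ** 2)
X = G @ slip_true + 0.01 * rng.normal(size=(n_stations, n_epochs))

U, s, Vt = np.linalg.svd(X, full_matrices=False)        # principal components
r = 2                                                   # truncation rank (assumption)

# invert each principal spatial vector for a slip pattern, then recombine
slip_pcs = np.linalg.lstsq(G, U[:, :r] * s[:r], rcond=None)[0]
slip_hat = slip_pcs @ Vt[:r]
print("relative misfit:", np.linalg.norm(G @ slip_hat - X) / np.linalg.norm(X))
```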

  10. [Component analysis of complex mixed solution based on multidimensional diffuse reflectance spectroscopy].

    PubMed

    Li, Gang; Xiong, Chan; Zhao, Li-ying; Lin, Ling; Tong, Ying; Zhang, Bao-ju

    2012-02-01

    In the present paper, the authors propose a method for the component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy, which analyzes the information carried by spectrum signals arising from the various optical properties of the components of the analyte. The experimental instrument was designed with a supercontinuum laser source, a motorized precision translation stage, and a spectrometer. Intralipid-20% was taken as the analyte and was diluted over a range of 1%-20% in distilled water. The diffuse reflectance spectrum signal was measured at 24 points within a distance of 1.5-13 mm (at intervals of 0.5 mm) from the incidence point. A partial least squares model was used to perform modeling and forecasting analysis on the spectral data collected from single and multiple points. The results showed that the most accurate calibration model was created from the spectral data acquired at the nearest 1-13 points above the incidence point, and the most accurate prediction model was created from the spectral signal acquired at the nearest 1-7 points. This shows that multidimensional diffuse reflectance spectroscopy can improve the spectral signal-to-noise ratio. Compared with traditional spectrum technology using a single optical property such as absorbance or reflectance, this method increases the contribution of the scattering characteristics of the analyte. The use of a variety of optical properties of the analyte can thus improve the accuracy of modeling and forecasting, and also provides a basis for the component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy.
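
    The multi-point PLS step can be sketched with scikit-learn's PLSRegression on concatenated spectra from several source-detector distances; the synthetic spectra, noise level, and component count are assumptions.

```python
# Multi-point diffuse reflectance -> PLS calibration sketch.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(14)
conc = rng.uniform(0.01, 0.20, size=60)                  # dilution fractions
points, bands = 7, 50                                    # 7 detection points
base = rng.random((points, bands))                       # per-point responses
X = np.hstack([np.outer(conc, b) for b in base])         # multi-point spectra
X += 0.01 * rng.normal(size=X.shape)                     # measurement noise

pls = PLSRegression(n_components=3).fit(X[:40], conc[:40])
pred = pls.predict(X[40:]).ravel()
print("RMSE:", np.sqrt(np.mean((pred - conc[40:]) ** 2)))
```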

  11. Analysis of active components in Salvia miltiorrhiza injection based on vascular endothelial cell protection.

    PubMed

    Shen, Jie; Yang, Kai; Sun, Caihua; Zheng, Minxia

    2014-09-01

    Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurements of antioxidant properties were used to describe the active ingredients of Salvia miltiorrhiza injection (SMI). HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of serum powder of SMI caused differences in vascular endothelial cell protection. Bivariate correlation analysis showed that salvianolic acid B, tanshinol and protocatechuic aldehyde were active components of SMI because they correlated with the antioxidant properties.

  12. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    PubMed

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations.

  13. NURBS-based isogeometric analysis for the computation of flows about rotating components

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Hughes, T. J. R.

    2008-12-01

    The ability of non-uniform rational B-splines (NURBS) to exactly represent circular geometries makes NURBS-based isogeometric analysis attractive for applications involving flows around and/or induced by rotating components (e.g., submarine and surface ship propellers). The advantage over standard finite element discretizations is that rotating components may be introduced into a stationary flow domain without geometric incompatibility. Although geometric compatibility is exactly achieved, the discretization of the flow velocity and pressure remains incompatible at the interface between the stationary and rotating subdomains. This incompatibility is handled by using a weak enforcement of the continuity of solution fields at the interface of the stationary and rotating subdomains.

  14. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  15. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiment, two questions remain: (1) how to explain the results obtained, and (2) how to extend the results from the certain cases to general cases. To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of parallel technique, parallel machine, and number of processors.

  16. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  17. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  18. The use of principal component and cluster analysis to differentiate banana peel flours based on their starch and dietary fibre components.

    PubMed

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-08-01

    Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits was assessed for total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) identified that only one component was responsible for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating differences in behaviour according to the stage of ripeness based on the starch and dietary fibre components. We concluded that the starch and dietary fibre components can be used to discriminate between flours prepared from peels obtained from fruits of different ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food.

  19. The Use of Principal Component and Cluster Analysis to Differentiate Banana Peel Flours Based on Their Starch and Dietary Fibre Components

    PubMed Central

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-01-01

    Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits was assessed for total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) identified that only one component was responsible for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating differences in behaviour according to the stage of ripeness based on the starch and dietary fibre components. We concluded that the starch and dietary fibre components can be used to discriminate between flours prepared from peels obtained from fruits of different ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food. PMID:24575193

  20. Independent component analysis of instantaneous power-based fMRI.

    PubMed

    Zhong, Yuan; Zheng, Gang; Liu, Yijun; Lu, Guangming

    2014-01-01

    In functional magnetic resonance imaging (fMRI) studies using the spatial independent component analysis (sICA) method, a model of "latent variables" is often employed, which is based on the assumption that fMRI data are linear mixtures of statistically independent signals. However, actual fMRI signals are nonlinear and do not automatically meet the requirements of sICA. To provide a better solution to this problem, we propose a novel approach termed instantaneous power based fMRI (ip-fMRI) for the regularization of fMRI data. Given that the instantaneous power of fMRI signals is a scalar value, it should be a linear mixture that naturally satisfies the "latent variables" model. Based on our simulated data, the accuracy curves and the resulting receiver-operating characteristic curves indicate that the proposed approach is superior to traditional fMRI in terms of accuracy and specificity when using sICA. Experimental results from human subjects show that the spatial components of hand-movement task-induced activation reveal a brain network more specific to motor function with ip-fMRI than with traditional fMRI. We conclude that ICA decomposition of ip-fMRI may be used to localize energy signal changes in the brain and has the potential to be applied to the detection of brain activity.

  1. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  2. A multi-fault diagnosis method for sensor systems based on principal component analysis.

    PubMed

    Zhu, Daqi; Bai, Jie; Yang, Simon X

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of sensors are computed by using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated. A fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, after combining different sensors and reconstructing the signals of combined sensors, the SPE is calculated to locate the faulty sensors. Finally, the feasibility and effectiveness of the proposed method is demonstrated by simulation and comparison studies, in which two sensors in the system are out of order at the same time.
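
    The SPE-based detection step can be sketched as follows: fit PCA on fault-free data, compute the squared prediction error of new samples, and flag jumps. The empirical percentile below stands in for the usual theoretical SPE control limit, and the neural-network and sensor-reconstruction stages are omitted.

```python
# PCA + squared prediction error (SPE) fault-detection sketch.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(15)
normal = rng.normal(size=(1_000, 5)) @ rng.normal(size=(5, 5))  # fault-free data
pca = PCA(n_components=3).fit(normal)

def spe(X):
    recon = pca.inverse_transform(pca.transform(X))
    return ((X - recon) ** 2).sum(axis=1)

limit = np.percentile(spe(normal), 99)                # empirical control limit

test = normal[:50].copy()
test[25:, 2] += 8.0                                   # bias fault on sensor 2
print("faulty samples:", np.where(spe(test) > limit)[0])
```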

  3. A Multi-Fault Diagnosis Method for Sensor Systems Based on Principal Component Analysis

    PubMed Central

    Zhu, Daqi; Bai, Jie; Yang, Simon X.

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of sensors are computed by using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated. A fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, after combining different sensors and reconstructing the signals of combined sensors, the SPE is calculated to locate the faulty sensors. Finally, the feasibility and effectiveness of the proposed method is demonstrated by simulation and comparison studies, in which two sensors in the system are out of order at the same time. PMID:22315537

  4. Damage localization in linear-form structures based on sensitivity investigation for principal component analysis

    NASA Astrophysics Data System (ADS)

    Viet Ha, Nguyen; Golinval, Jean-Claude

    2010-10-01

    This paper addresses the problem of damage detection and localization in linear-form structures. Principal component analysis (PCA) is a popular technique for dynamic system investigation. The aim of the paper is to present a damage diagnosis method based on sensitivities of PCA results in the frequency domain. Starting from frequency response functions (FRFs) measured at different locations on the structure, PCA is performed to determine the main features of the signals. Sensitivities of the principal directions obtained from PCA to structural parameters are then computed and inspected according to the location of the sensors; their variation from the healthy state to the damaged state indicates damage locations. It is worth noting that damage localization is performed without the need for modal identification. The influences of factors such as noise, the choice of parameters, and the number of sensors are discussed. The efficiency and limitations of the proposed method are illustrated using numerical and real-world examples.

  5. Structure borne noise analysis using Helmholtz equation least squares based forced vibro acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology that is focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation and hence cannot guarantee the most cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical coordinates using spherical wave expansion functions, utilize acoustic pressures measured in the nearfield of a vibrating object to reconstruct the vibro-acoustic responses on the source surface and the acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of the normal surface velocity of the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain an orthogonal basis known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a similar manner that modal decomposition identifies the critical natural modes in a structural vibration. Finally...

  6. Contact- and distance-based principal component analysis of protein dynamics

    SciTech Connect

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
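
    A rough sketch of the distance-based input construction, assuming a hypothetical trajectory array and an arbitrary subset of atom pairs (the paper's contact-based selection is more refined):

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

# Hypothetical trajectory: 1000 frames of 10 C-alpha atoms in 3-D.
rng = np.random.default_rng(1)
traj = rng.normal(size=(1000, 10, 3))

# Build the distance-based feature matrix from a few selected atom pairs;
# the paper notes that a small subset of distances is often sufficient.
pairs = list(combinations(range(10), 2))[:15]   # 15 selected pairs
D = np.stack([np.linalg.norm(traj[:, i] - traj[:, j], axis=1)
              for i, j in pairs], axis=1)

pcs = PCA(n_components=2).fit_transform(D)      # free-energy landscape axes
```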

  7. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods.

    PubMed

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-15

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping, and there is a need to investigate various approaches to model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smoothness-constrained NMF (CNMF) algorithms for feature component extraction and identification in the field of terahertz time-domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of the NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated on several THz-TDS data decomposition experiments, including a binary system and a ternary system simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and is more robust to random initialization than NMF. The investigated method is promising for resolving THz data and contributes to the identification of unknown mixtures.
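
    A minimal unmixing sketch on simulated nonnegative data; scikit-learn's plain NMF is used here, since the smoothness-constrained CNMF of the paper is not part of standard libraries:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical THz-TDS data: 200 mixed spectra over 300 frequency points,
# generated from 3 nonnegative component spectra.
rng = np.random.default_rng(2)
S_true = rng.random((3, 300))                  # component spectra
A_true = rng.random((200, 3))                  # abundances
X = A_true @ S_true + 0.01 * rng.random((200, 300))

# Plain NMF; regularization terms play a broadly similar stabilizing role
# to the smoothness constraint discussed in the abstract.
model = NMF(n_components=3, init='nndsvd', max_iter=500, random_state=0)
A = model.fit_transform(X)                     # estimated abundances
S = model.components_                          # estimated component spectra
```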

  8. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    NASA Astrophysics Data System (ADS)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping, and there is a need to investigate various approaches to model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smoothness-constrained NMF (CNMF) algorithms for feature component extraction and identification in the field of terahertz time-domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of the NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated on several THz-TDS data decomposition experiments, including a binary system and a ternary system simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and is more robust to random initialization than NMF. The investigated method is promising for resolving THz data and contributes to the identification of unknown mixtures.

  9. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    PubMed Central

    2010-01-01

    Background Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degree-of-freedom (DoF) hand to achieve dexterity implies selectively modulating many different EMG signals so that each joint can move independently, and this could require significant cognitive effort from the user. Methods A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms most used in Activities of Daily Living (ADLs). The principal component set was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and was subsequently used for control. Results Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036
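
    A toy sketch of the control idea: a low-dimensional input is mapped through a principal-component matrix (randomly generated here, hence hypothetical) to a full hand posture:

```python
import numpy as np

# Hypothetical principal-component matrix mapping a 2-D control input to
# 16 joint angles; in the paper it is learned from 50 recorded grasps.
rng = np.random.default_rng(3)
mean_posture = rng.uniform(0.0, 1.0, size=16)   # average joint configuration
pc_matrix = rng.normal(size=(16, 2))            # first two PCs (16 DoFs x 2)

def posture_from_input(u):
    """Map a 2-D control signal u to a full 16-DoF hand posture."""
    return mean_posture + pc_matrix @ np.asarray(u)

print(posture_from_input([0.3, -0.1]).shape)    # -> (16,)
```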

  10. Robust principal component analysis-based four-dimensional computed tomography.

    PubMed

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-07

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of a low-rank matrix and a sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction and/or change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames.
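
    The low-rank-plus-sparse model can be illustrated with a generic principal component pursuit solver (inexact augmented Lagrange multipliers with singular-value thresholding); note that this is a standard textbook sketch, not the split Bregman, tight-frame algorithm developed in the paper:

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split M into low-rank L ("background") + sparse S ("motion")
    by principal component pursuit (inexact ALM)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        # Singular-value thresholding yields the low-rank update.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Soft thresholding yields the sparse update.
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy "4D" data: a rank-5 static background plus sparse moving content.
rng = np.random.default_rng(0)
M = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 40))
M[rng.random(M.shape) < 0.05] += 10.0
L, S = robust_pca(M)
```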

  11. A method for developing biomechanical response corridors based on principal component analysis.

    PubMed

    Sun, W; Jin, J H; Reed, M P; Gayzik, F S; Danelson, K A; Bass, C R; Zhang, J Y; Rupp, J D

    2016-10-03

    The standard method for specifying target responses for human surrogates, such as crash test dummies and human computational models, involves developing a corridor based on the distribution of a set of empirical mechanical responses. These responses are commonly normalized to account for the effects of subject body shape, size, and mass on impact response. Limitations of this method arise from the normalization techniques, which are based on the assumption that human geometry scales linearly with size and, in some cases, on simple mechanical models. To address these limitations, a new method was developed for corridor generation that applies principal component (PC) analysis to align response histories. Rather than use normalization techniques to account for the effects of subject size on impact response, linear regression models are used to model the relationship between PC features and subject characteristics. Corridors are generated using Monte Carlo simulation based on estimated distributions of PC features for each PC. This method is applied to pelvis impact force data from a recent series of lateral impact tests to develop corridor bounds for a group of signals associated with a particular subject size. Compared to the two most common methods for response normalization, the corridors generated by the new method are narrower and better retain the features in signals that are related to subject size and body shape.

  12. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.

  13. Reconstruction of transcriptional regulatory networks by stability-based network component analysis.

    PubMed

    Chen, Xi; Xuan, Jianhua; Wang, Chen; Shajahan, Ayesha N; Riggins, Rebecca B; Clarke, Robert

    2013-01-01

    Reliable inference of transcription regulatory networks is a challenging task in computational biology. Network component analysis (NCA) has become a powerful scheme to uncover the regulatory networks behind complex biological processes. However, the performance of NCA is impaired by the high rate of false connections in binding information. In this paper, we integrate stability analysis with NCA to form a novel scheme, namely stability-based NCA (sNCA), for regulatory network identification. The method mainly addresses the inconsistency between gene expression data and binding motif information. Small perturbations are introduced to the prior regulatory network, and the distance among multiple estimated transcription factor (TF) activities is computed to reflect the stability of each TF's binding network. For target gene identification, multivariate regression and the t-statistic are used to calculate the significance of each TF-gene connection. Simulation studies are conducted and the experimental results show that sNCA can achieve an improved and robust performance in TF identification as compared to NCA. The approach for target gene identification is also demonstrated to be suitable for identifying true connections between TFs and their target genes. Furthermore, we have successfully applied sNCA to breast cancer data to uncover the role of TFs in regulating endocrine resistance in breast cancer.

  14. SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components

    SciTech Connect

    Wendt, R

    2014-06-15

    Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819 respectively and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking. Supported by a sponsored research

  15. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
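
    A simplified sketch of the PCA-plus-ANN mapping on a toy field model (the sensor geometry, field model and noise level are invented for illustration):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: 9 magnetic-sensor readings per sample and
# the corresponding 1-D actuator position from a reference encoder.
rng = np.random.default_rng(4)
positions = rng.uniform(0, 100, size=2000)            # position in mm
fields = np.outer(positions, rng.normal(size=9))      # toy rank-1 field model
fields += 0.05 * rng.normal(size=fields.shape)        # Gaussian sensor noise

# PCA acts as a pseudo-linear filter that compresses the 9-sensor output
# before the ANN learns the field-to-position mapping.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(20,),
                                   max_iter=2000, random_state=0))
model.fit(fields, positions)
print(model.predict(fields[:3]), positions[:3])
```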

  16. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising.

    PubMed

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain the transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods.

  17. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain the transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  18. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis

    PubMed Central

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253

  19. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low-spatial-frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-size spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-size aberration terms, the precise phase aberration map is obtained and can be compensated for by multiplying by its conjugate. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved and the computation time is significantly reduced. Experimental results demonstrate that the proposed technique achieves both high compensation accuracy and robustness compared with other developed compensation methods.

  20. Recursive principal components analysis.

    PubMed

    Voegtlin, Thomas

    2005-10-01

    A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
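
    For reference, Oja's rule itself can be sketched in a few lines; the example below extracts the first principal component of a synthetic stream (the recurrent, temporal-context extension described in the paper is not shown):

```python
import numpy as np

# Oja's constrained Hebbian rule extracts the first principal component
# of a data stream by adding a built-in weight normalization term.
rng = np.random.default_rng(5)
X = rng.normal(size=(5000, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])

w = rng.normal(size=4)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)       # Hebbian term minus decay keeps ||w|| ~ 1

print(w / np.linalg.norm(w))         # converges to the leading eigenvector
```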

  1. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    SciTech Connect

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals thus enabling accurate monitoring over time.

  2. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
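
    A small illustration of the underlying idea, assuming two synthetic non-Gaussian sources standing in for filter outputs: ICA recovers the independent axes (up to scale and permutation), whereas PCA only decorrelates:

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Hypothetical stand-in for filter outputs: two non-Gaussian sources
# (uniform and Laplacian) observed through a random linear mixture.
rng = np.random.default_rng(6)
S = np.column_stack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
X = S @ rng.normal(size=(2, 2)).T

ica = FastICA(n_components=2, random_state=0)
S_ica = ica.fit_transform(X)                   # independent axes
S_pca = PCA(n_components=2).fit_transform(X)   # decorrelated axes only
```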

  3. Cistanches identification based on fluorescent spectral imaging technology combined with principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Dong, Jia; Huang, Furong; Li, Yuanpeng; Xiao, Chi; Xian, Ruiyi; Ma, Zhiguo

    2015-03-01

    In this study, fluorescent spectral imaging technology combined with principal component analysis (PCA) and artificial neural networks (ANNs) was used to identify Cistanche deserticola, Cistanche tubulosa and Cistanche sinensis, which are traditional Chinese medicinal herbs. The fluorescence spectral imaging system acquired the spectral images of 40 cistanche samples; image denoising and binarization were applied to determine the effective pixels. Spectral curves were then drawn from the data in the 450-680 nm wavelength range. The data were preprocessed with a first-order derivative and analyzed by principal component analysis and an artificial neural network. The results show that principal component analysis can broadly distinguish the cistanches, and further identification by the neural network makes the results more accurate, with correct rates for the testing and training sets as high as 100%. Identifying cistanches based on the fluorescence spectral imaging technique combined with principal component analysis and an artificial neural network is therefore feasible.

  4. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations. PMID:25061614

  5. Towards Zero Retraining for Myoelectric Control Based on Common Model Component Analysis.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Zhu, Xiangyang

    2016-04-01

    In spite of several decades of intense research and development, the existing algorithms of myoelectric pattern recognition (MPR) are yet to satisfy the criteria that a practical upper extremity prosthesis should fulfill. This study focuses on the criterion of short, or even zero, subject training. Due to the inherent nonstationarity of surface electromyography (sEMG) signals, current myoelectric control algorithms usually need to be retrained daily over multiple days of usage. This study was conducted based on the hypothesis that there exist some invariant characteristics in the sEMG signals when a subject performs the same motion on different days. Therefore, given a set of classifiers (models) trained on several days, it is possible to find common characteristics among them. To this end, we proposed the common model component analysis (CMCA) framework, in which an optimized projection is found to minimize the dissimilarity among multiple models of linear discriminant analysis (LDA) trained using data from different days. Five intact-limbed subjects and two transradial amputee subjects participated in an experiment including six sessions of sEMG data recording, performed on six different days, to simulate the application of MPR over multiple days. The results demonstrate that CMCA has a significantly better generalization ability with unseen data (not included in the training data), leading to improved classification accuracy and an increased completion rate in a motion test simulation, compared with the baseline reference method. The results indicate that CMCA holds great potential in the effort toward zero retraining of MPR.

  6. SU-F-BRA-13: Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Based On Principal Component Analysis

    SciTech Connect

    Roper, J; Bradshaw, B; Godette, K; Schreibmann, E; Chanyavanich, V

    2015-06-15

    Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using the Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications that required input from a treatment planner to achieve an acceptable plan were recorded. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to those of prior cases, and thus are inherently tailored to physician preferences.
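
    A minimal sketch of the case-matching step, assuming hypothetical PC scores for the library; the Mahalanobis distance ranks library cases by similarity to the test case:

```python
import numpy as np

# Hypothetical library: PCA feature vectors for 120 prior contour sets.
rng = np.random.default_rng(7)
library = rng.normal(size=(120, 5))             # 5 PC scores per case

def best_match(test_scores, library):
    """Rank library cases by Mahalanobis distance in PC-score space."""
    cov_inv = np.linalg.inv(np.cov(library, rowvar=False))
    d = test_scores - library
    dists = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return np.argsort(dists)                     # best-matched case first

order = best_match(rng.normal(size=5), library)
print(order[0])   # index of the library case whose seeds initialize planning
```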

  7. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain)and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  8. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy.

  9. A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use

    PubMed Central

    Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien

    2016-01-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553

  10. Bio-inspired controller for a dexterous prosthetic hand based on Principal Components Analysis.

    PubMed

    Matrone, G; Cipriani, C; Secco, E L; Carrozza, M C; Magenes, G

    2009-01-01

    Controlling a dexterous myoelectric prosthetic hand with many degrees of freedom (DoFs) can be a very demanding task, which requires high concentration from the amputee and the ability to modulate many different muscular contraction signals. In this work a new approach to multi-DoF control is proposed, which makes use of Principal Component Analysis (PCA) to reduce the dimensionality of the DoF space and allows a 15-DoF hand to be driven by means of a 2-DoF signal. This approach has been tested and properly adapted to work on the underactuated robotic hand named CyberHand, using mouse cursor coordinates as input signals and a principal components (PCs) matrix taken from the literature. First trials show the feasibility of performing grasps using this method. Further tests with real EMG signals are foreseen.

  11. Principal Components Analysis Based Unsupervised Feature Extraction Applied to Gene Expression Analysis of Blood from Dengue Haemorrhagic Fever Patients

    PubMed Central

    Taguchi, Y-h.

    2017-01-01

    Dengue haemorrhagic fever (DHF) sometimes occurs after recovery from the disease caused by Dengue virus (DENV), and is often fatal. However, the mechanism of DHF has not been determined, possibly because no suitable methodologies are available to analyse this disease. Therefore, more innovative methods are required to analyse the gene expression profiles of DENV-infected patients. Principal components analysis (PCA)-based unsupervised feature extraction (FE) was applied to the gene expression profiles of DENV-infected patients, and an integrated analysis of two independent data sets identified 46 genes as critical for DHF progression. PCA using only these 46 genes rendered the two data sets highly consistent. The application of PCA to the 46 genes of an independent third data set successfully predicted the progression of DHF. A fourth in vitro data set confirmed the identification of the 46 genes. These 46 genes included interferon- and heme-biosynthesis-related genes. The former are enriched in binding sites for STAT1, STAT2, and IRF1, which are associated with DHF-promoting antibody-dependent enhancement, whereas the latter are considered to be related to the dysfunction of spliceosomes, which may mediate haemorrhage. These results are outcomes that other types of bioinformatic analysis could hardly achieve. PMID:28276456
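
    A simplified sketch of the feature-extraction idea on random stand-in data: genes, rather than samples, are embedded in PC space, and those with outlying scores are selected (the actual method assigns P-values to the normalized scores; the thresholding below is an illustrative simplification):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical expression matrix: 1000 genes x 40 patient samples.
rng = np.random.default_rng(8)
expr = rng.normal(size=(1000, 40))

# PCA-based unsupervised FE embeds *genes* (not samples) into PC space
# and flags genes with outlying scores as candidate critical genes.
scores = PCA(n_components=2).fit_transform(expr)
dist2 = np.sum((scores / scores.std(axis=0)) ** 2, axis=1)
candidates = np.argsort(dist2)[::-1][:46]       # top genes, cf. the 46 found
```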

  12. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data have been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has a good performance.

  13. Metabolic distance estimation based on principal component analysis of metabolic turnover.

    PubMed

    Nakayama, Yasumune; Putri, Sastia P; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-09-01

    Visualization of metabolic dynamism is important for various types of metabolic studies, including studies on the optimization of bio-production processes and studies of metabolism-related diseases. Many methodologies have been developed for metabolic studies. Among these, metabolic turnover analysis (MTA) is often used to analyze metabolic dynamics. MTA involves observation of changes in the isotopomer ratios of metabolites over time following the introduction of isotope-labeled substrates. MTA has several advantages compared with (13)C-metabolic flux analysis, including the diversity of applicable samples, the variety of isotope tracers, and the wide range of target pathways. However, MTA produces highly complex data from which mining useful information is difficult. For easier understanding of MTA data, a new approach was developed using principal component analysis (PCA). The resulting PCA score plot visualizes the metabolic distance, which is defined as the distance between metabolites on the real metabolic map, and gives hints of interesting metabolism for further study. We used this method to analyze the central metabolism of Saccharomyces cerevisiae under moderately aerobic conditions, and time course data for 77 isotopomers of 14 metabolites were obtained. The PCA score plot for this dataset represented a metabolic map and indicated interesting phenomena such as the activity of fumarate reductase under aerated conditions. These findings show the importance of multivariate analysis to MTA. In addition, because the approach is unbiased, this method has potential application to the analysis of less-studied pathways and organisms.

  14. Brain responses to emotional stimuli during breath holding and hypoxia: an approach based on the independent component analysis.

    PubMed

    Menicucci, Danilo; Artoni, Fiorenzo; Bedini, Remo; Pingitore, Alessandro; Passera, Mirko; Landi, Alberto; L'Abbate, Antonio; Sebastiani, Laura; Gemignani, Angelo

    2014-11-01

    Voluntary breath holding represents a physiological model of hypoxia. It consists of two phases of oxygen saturation dynamics: an initial slow decrease (normoxic phase) followed by a rapid drop (hypoxic phase), during which transitory neurological symptoms as well as slight impairment of integrated cerebral functions, such as emotional processing, can occur. This study investigated how breath holding affects emotional processing. To this aim we characterized the modulation of event-related potentials (ERPs) evoked by emotion-laden pictures as a function of the breath holding time course. We recorded ERPs during free breathing and breath holding performed in air by elite apnea divers. We modeled brain responses during free breathing with four independent components distributed over different brain areas, derived by an approach based on independent component analysis (ICASSO). We described ERP changes during breath holding by estimating amplitude scaling and time shifting of the same components (component adaptation analysis). Component 1 included the main EEG features of emotional processing, had a posterior localization and did not change during breath holding; component 2, localized over temporo-frontal regions, was present only in responses to unpleasant stimuli and decreased during breath holding, with no differences between breath holding phases; component 3, localized on the fronto-central midline regions, showed phase-independent breath holding decreases; component 4, quite widespread but with frontal prevalence, decreased in parallel with the hypoxic trend. The spatial localization of these components was compatible with a set of processing modules that affects the automatic and intentional control of attention. The reduction of the unpleasant-related ERP components suggests that the evaluation of aversive and/or possibly dangerous situations might be altered during breath holding.

  15. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
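
    The within-subband clustering that SLCCA exploits can be illustrated with a plain connected-component labeling of a thresholded significance map (a toy subband is used here; the codec's significance links and bit-plane coding are not shown):

```python
import numpy as np
from scipy import ndimage

# Hypothetical subband of wavelet coefficients; SLCCA exploits the fact
# that significant coefficients cluster within a subband.
rng = np.random.default_rng(9)
subband = rng.normal(scale=0.1, size=(64, 64))
subband[20:28, 30:40] += 3.0                   # a cluster of significant values

significance = np.abs(subband) > 1.0           # threshold significance map
labels, n = ndimage.label(significance)        # connected-component analysis
print(n, ndimage.sum(significance, labels, np.arange(1, n + 1)))
```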

  16. [Algae identification research based on fluorescence spectral imaging technology combined with cluster analysis and principal component analysis].

    PubMed

    Liang, Man; Huang, Fu-rong; He, Xue-jia; Chen, Xing-dan

    2014-08-01

    In order to explore rapid real-time algae detection methods, experiments were carried out using fluorescence spectral imaging technology combined with pattern recognition methods to identify different types of algae. Algae samples show a pronounced fluorescence effect during detection. The fluorescence spectral imaging system was adopted to collect spectral images of 40 algal samples. After image denoising and binarization to determine the effective pixels, the spectral curve of each sample was drawn from the spectral cube, and spectra in the 400-720 nm wavelength range were obtained. Two pattern recognition methods, hierarchical cluster analysis and principal component analysis, were then used to process the spectral data. The hierarchical cluster analysis, using the Euclidean distance and the average weighted method to calculate the cluster distance between samples, classified the samples correctly at a distance level of L=2.452 or above, with an accuracy of 100%. For the principal component analysis, first-order derivative, second-order derivative, multiplicative scatter correction, standard normal variate and other pretreatments were applied to the raw spectral data before the analysis; the second-order derivative pretreatment gave the most effective identification, with the eight types of algae samples independently distributed in the principal component eigenspace. It is thus feasible to use fluorescence spectral imaging technology combined with cluster analysis and principal component analysis for algae identification. The method is easy to operate, fast and nondestructive.

  17. Online handwritten signature verification using neural network classifier based on principal component analysis.

    PubMed

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Yussof, Salman; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%.

  18. [Fetal electrocardiogram extraction based on independent component analysis and quantum particle swarm optimizer algorithm].

    PubMed

    Du, Yanqin; Huang, Hua

    2011-10-01

    Fetal electrocardiogram (FECG) is an objective index of the activities of fetal cardiac electrophysiology. The acquired FECG is interfered with by the maternal electrocardiogram (MECG), so how to extract the fetal ECG quickly and effectively has become an important research topic. Among non-invasive FECG extraction algorithms, independent component analysis (ICA) is considered the best method, but the existing algorithms for obtaining the demixing matrix do not converge effectively. Quantum particle swarm optimization (QPSO) is an intelligent optimization algorithm with global convergence. In order to extract the FECG signal effectively and quickly, we propose a method combining ICA and QPSO. The results show that this approach can extract the useful signal more clearly and accurately than other non-invasive methods.

  19. Online Handwritten Signature Verification Using Neural Network Classifier Based on Principal Component Analysis

    PubMed Central

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  20. Use of principal component analysis for differentiation of gelatine sources based on polypeptide molecular weights.

    PubMed

    Nur Azira, T; Che Man, Y B; Raja Mohd Hafidz, R N; Aina, M A; Amin, I

    2014-05-15

    This study aimed to differentiate between porcine and bovine gelatines in adulterated samples by utilising sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) combined with principal component analysis (PCA). The distinct polypeptide patterns of 6 porcine type A and 6 bovine type B gelatines at molecular weights ranging from 50 to 220 kDa were studied. Experimental samples of raw gelatine were prepared by adding porcine gelatine in a proportion ranging from 5% to 50% (v/v) to bovine gelatine and vice versa. The method used was able to detect 5% porcine gelatine added to the bovine gelatine. There were no differences in the electrophoretic profiles of the jelly samples when the proteins were extracted with an acetone precipitation method. The simple approach employing SDS-PAGE and PCA reported in this paper may provide a useful tool for food authenticity issues concerning gelatine.

  1. Plant-wide process monitoring based on mutual information-multiblock principal component analysis.

    PubMed

    Jiang, Qingchao; Yan, Xuefeng

    2014-09-01

    Multiblock principal component analysis (MBPCA) methods are gaining increasing attention in monitoring plant-wide processes. Generally, MBPCA assumes that some process knowledge is incorporated for block division; however, process knowledge is not always available. A new, totally data-driven MBPCA method, which employs mutual information (MI) to divide the blocks automatically, is proposed. By constructing sub-blocks using MI, the division not only considers linear correlations between variables but also takes into account non-linear relations, thereby involving more statistical information. The PCA models in the sub-blocks reflect more local behaviors of the process, and the results of all blocks are combined by support vector data description. The proposed method is implemented on a numerical process and the Tennessee Eastman process. Monitoring results demonstrate its feasibility and efficiency.
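
    A toy illustration of the MI-driven block division, computing a pairwise mutual-information matrix for synthetic variables that form two correlated groups (the paper's actual division algorithm and the support vector data description combination step are not shown):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Hypothetical plant data: 200 samples of 6 process variables, where
# variables 0-2 and 3-5 form two correlated groups.
rng = np.random.default_rng(10)
base = rng.normal(size=(200, 2))
X = np.column_stack([base[:, [0]] + 0.1 * rng.normal(size=(200, 3)),
                     base[:, [1]] + 0.1 * rng.normal(size=(200, 3))])

# Pairwise MI matrix; a variable would be assigned to the block of the
# variables with which it shares the most mutual information.
n = X.shape[1]
mi = np.zeros((n, n))
for j in range(n):
    mi[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
print(np.round(mi, 2))   # high within-group, low between-group entries
```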

  2. A Novel Principal Component Analysis-Based Acceleration Scheme for LES-ODT: An A Priori Study

    NASA Astrophysics Data System (ADS)

    Echekki, Tarek; Mirgolbabaei, Hessan

    2012-11-01

    A parameterization of the composition space based on principal component analysis (PCA) is proposed to represent the transport equations with the one-dimensional turbulence (ODT) solutions of a hybrid large-eddy simulation (LES) and ODT scheme. An a priori validation of the proposed approach is implemented based on stand-alone ODT solutions of Sandia Flame F, which is characterized by different combustion regimes, from pilot stabilization through extinction and reignition to self-stabilized combustion. The PCA is carried out with the full set of the thermo-chemical scalars' vector as well as a subset of this vector, made up primarily of major species and temperature. The results show that the different regimes are reproduced using only three principal components for the thermo-chemical scalars, based on both the full set and a subset of the thermo-chemical scalars' vector. Reproduction of the principal component source terms represents a greater challenge. It is found that, using the subset of the thermo-chemical scalars' vector, both the minor species and the source terms of the first three principal components are reasonably well predicted.

  3. Analysis Components Investigation Report

    DTIC Science & Technology

    2014-10-01

  4. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms when comparing them with foreign requirements for the consideration of calculation situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA norms for pressure vessel material acceptance, as well as data from pressure vessel hydrotests; (3) a comparison of norms on the presence of defects (RF and USA) in NPP vessels, development of defect schematization rules, and justification of a calculated defect (semi-axis relation a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) an analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity to correct estimation methods of ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculation situations (a) LBB and (b) short cracks; (7) the necessity to correct estimation methods of ultimate states with consideration of static and cyclic loading (warm prestressing effect) of pressure vessels, and estimation of the stability of the effect; (8) proposals on corrections to the PNAE G-7-002-86 Norms.

  5. A robust principal component analysis algorithm for EEG-based vigilance estimation.

    PubMed

    Shi, Li-Chen; Duan, Ruo-Nan; Lu, Bao-Liang

    2013-01-01

    Feature dimensionality reduction methods with robustness have great significance for making better use of EEG data, since EEG features are usually high-dimensional and contain a lot of noise. In this paper, a robust principal component analysis (PCA) algorithm is introduced to reduce the dimension of EEG features for vigilance estimation. The performance is compared with that of standard PCA, L1-norm PCA, sparse PCA, and robust PCA in feature dimension reduction on an EEG data set of twenty-three subjects. To evaluate the performance of these algorithms, smoothed differential entropy features are used as the vigilance-related EEG features. Experimental results demonstrate that the robustness and performance of robust PCA are better than those of the other algorithms for both off-line and on-line vigilance estimation. The average RMSE (root mean square error) of vigilance estimation was 0.158 when robust PCA was applied to reduce the dimensionality of features, while the average RMSE was 0.172 when standard PCA was used in the same task.

  6. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix.

    PubMed

    Gao, H; Wu, Y; Zhang, T; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-12-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association studies (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which the multiple traits are transformed to a group of pseudo principal components. The PCA of the residual covariance matrix allows each pseudo principal component to be analyzed separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. Furthermore, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering traits and meat quality traits in beef cattle.

  7. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix

    PubMed Central

    Gao, H; Zhang, T; Wu, Y; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-01-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association studies (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which the multiple traits are transformed to a group of pseudo principal components. The PCA of the residual covariance matrix allows each pseudo principal component to be analyzed separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. Furthermore, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering traits and meat quality traits in beef cattle. PMID:24984606
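
    The core transformation can be sketched as follows, assuming a simple fixed-effect model fitted by least squares; the phenotypes and covariates below are synthetic stand-ins, and the LASSO-based sparse genetic model of the full method is omitted.

    ```python
    # Sketch: transform multiple traits via PCA of the *residual* covariance matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_traits = 500, 20
    covariates = np.column_stack([np.ones(n), rng.random((n, 2))])  # fixed effects
    Y = rng.random((n, n_traits))                                   # phenotypes (stand-in)

    # Residuals after a least-squares fit of the fixed-effect model to each trait.
    beta, *_ = np.linalg.lstsq(covariates, Y, rcond=None)
    R = Y - covariates @ beta

    # Eigen-decompose the residual covariance, not the phenotypic covariance.
    cov_res = np.cov(R, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov_res)

    # Pseudo principal components: each column can now be analyzed separately
    # in a univariate scan while preserving the joint-analysis parameterization.
    pseudo_pcs = Y @ eigvec
    print(pseudo_pcs.shape)  # (500, 20)
    ```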

  8. Quantitative analysis of multiple components based on liquid chromatography with mass spectrometry in full scan mode.

    PubMed

    Xu, Min Li; Li, Bao Qiong; Wang, Xue; Chen, Jing; Zhai, Hong Lin

    2016-08-01

    Although liquid chromatography with mass spectrometry in full scan mode can acquire all the signals simultaneously over a large mass range at low cost, it is rarely used in quantitative analysis due to several problems such as chromatographic drift and peak overlap. In this paper, we propose a Tchebichef moment method for the simultaneous quantitative analysis of three active compounds in Qingrejiedu oral liquid based on three-dimensional spectra in full scan mode of liquid chromatography with mass spectrometry. After the Tchebichef moments were calculated directly from the spectra, the quantitative linear models for the three active compounds were established by stepwise regression. All the correlation coefficients were more than 0.9978. The limits of detection and limits of quantitation were less than 0.11 and 0.49 μg/mL, respectively. The intra- and interday precisions were less than 6.54 and 9.47%, while the recovery ranged from 102.56 to 112.15%. Owing to their multi-resolution and inherent invariance properties, Tchebichef moments can provide favorable results even in the presence of peak shifting and overlap, unknown interferences, and noise, so the method can be applied to the analysis of three-dimensional spectra in full scan mode of liquid chromatography with mass spectrometry.

  9. Design and Analysis of a Novel Six-Component F/T Sensor based on CPM for Passive Compliant Assembly

    NASA Astrophysics Data System (ADS)

    Liang, Qiaokang; Zhang, Dan; Wang, Yaonan; Ge, Yunjian

    2013-10-01

    This paper presents the design and analysis of a six-component Force/Torque (F/T) sensor whose design is based on a Compliant Parallel Mechanism (CPM). The sensor is used to measure forces along the x-, y-, and z-axes (Fx, Fy and Fz) and moments about the x-, y-, and z-axes (Mx, My and Mz) simultaneously, and to provide passive compliance during parts handling and assembly. In particular, the structural design, the details of the measuring principle and the kinematics are presented. Afterwards, a Finite Element Analysis (FEA) is performed based on the Design of Experiments (DOE) approach provided by the software ANSYS®, with the objective of achieving both high sensitivity and isotropy of the sensor. The results of the FEA show that the proposed sensor possesses high performance and robustness.

  10. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    NASA Astrophysics Data System (ADS)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined the Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli vs. neutral ones. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records of each emotional picture were concatenated with the neutral, and a three-step analysis was applied to each of them in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). ICs associated with artifacts were isolated. Second, a clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for the Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition and the neutral one. Compared to the neutral, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were well consistent with other functional imaging
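
    The first, ICA step of such a pipeline can be sketched with scikit-learn's FastICA as below; the three synthetic sources and the 32-channel mixing are stand-ins for real 129-sensor records, and artifact identification is only indicated.

    ```python
    # Sketch: decompose multichannel EEG into temporally independent components
    # with spatially fixed projection maps.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 2000)
    S = np.column_stack([np.sin(7 * t),                # rhythm-like source
                         np.sign(np.sin(3 * t)),       # square-wave source
                         rng.laplace(size=t.size)])    # artifact-like source
    A = rng.random((32, 3))                            # mixing to 32 "sensors"
    X = S @ A.T

    ica = FastICA(n_components=3, random_state=0)
    ics = ica.fit_transform(X)         # temporally independent components
    topographies = ica.mixing_         # spatially fixed projection maps

    # Artifact ICs would be identified here (e.g. by kurtosis) and discarded.
    print(ics.shape, topographies.shape)
    ```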

  11. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate the robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and the surrounding tissue. This similarity generates weak edges, which represent a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios where the intensity of the fluorescent image is varied by almost two orders of magnitude.

  12. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, H.D.; Ayers, C.W.; Casada, D.A.

    1996-11-26

    A new instrument is described for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician. 4 figs.

  13. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.

  14. Design and Validation of a Morphing Myoelectric Hand Posture Controller Based on Principal Component Analysis of Human Grasping

    PubMed Central

    Segil, Jacob L.; Weir, Richard F. ff.

    2015-01-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph into any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures, including functional grasps, using between two and four surface electromyography (EMG) electrode pairs. Four unique maps were developed to transform the EMG control signals into the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with the highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied, including completion rate, time to completion, and path efficiency. The highest-performing map controlled over 13 of the 15 joints accurately. PMID:23649286

  15. Multi-objective analysis of a component-based representation within an interactive evolutionary design system

    NASA Astrophysics Data System (ADS)

    Machwe, A. T.; Parmee, I. C.

    2007-07-01

    This article describes research relating to a user-centered evolutionary design system that evaluates both engineering and aesthetic aspects of design solutions during early-stage conceptual design. The experimental system comprises several components relating to user interaction, problem representation, evolutionary search and exploration and online learning. The main focus of the article is the evolutionary aspect of the system when using a single quantitative objective function plus subjective judgment of the user. Additionally, the manner in which the user-interaction aspect affects system output is assessed by comparing Pareto frontiers generated with and without user interaction via a multi-objective evolutionary algorithm (MOEA). A solution clustering component is also introduced and it is shown how this can improve the level of support to the designer when dealing with a complex design problem involving multiple objectives. Supporting results are from the application of the system to the design of urban furniture which, in this case, largely relates to seating design.

  16. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
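
    A sketch of this processing chain follows, assuming the e-nose yields a fixed-length feature vector per wine; the measurements, sensory descriptors and class labels are synthetic stand-ins.

    ```python
    # Sketch: ICA for dimensionality reduction, PLS for descriptor prediction,
    # and a small neural network for classification.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    X = rng.random((26, 40))            # 26 wines x 40 e-nose features (stand-in)
    y_desc = rng.random((26, 3))        # sensory-panel descriptors (stand-in)
    y_class = rng.integers(0, 3, 26)    # region/variety labels (stand-in)

    Z = FastICA(n_components=5, random_state=0).fit_transform(X)

    pls = PLSRegression(n_components=3).fit(Z, y_desc)   # descriptor prediction
    ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=0).fit(Z, y_class)  # classification
    print(pls.predict(Z[:2]), ann.predict(Z[:2]))
    ```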

  17. Semi-blind independent component analysis of fMRI based on real-time fMRI system.

    PubMed

    Ma, Xinyue; Zhang, Hang; Zhao, Xiaojie; Yao, Li; Long, Zhiying

    2013-05-01

    Real-time functional magnetic resonance imaging (fMRI) is a type of neurofeedback tool that enables researchers to train individuals to actively gain control over their brain activation. Independent component analysis (ICA), a data-driven model, is seldom used in real-time fMRI studies due to its large time cost, though it has been very popular for offline analysis of fMRI data. The feasibility of performing real-time ICA (rtICA) processing has been demonstrated by a previous study. However, rtICA has only been applied to analyze single-slice data rather than full-brain data. In order to improve the performance of rtICA, we proposed semi-blind real-time ICA (sb-rtICA) for our real-time fMRI system by adding to rtICA a regularization of certain estimated time courses based on the experimental paradigm information. Both simulated and real fMRI experiments were conducted to compare the two approaches. Results from simulated and real full-brain fMRI data demonstrate that sb-rtICA outperforms rtICA in robustness, computational time and spatial detection power. Moreover, in contrast to rtICA, the first component estimated by sb-rtICA tends to be the target component in more sliding windows.

  18. Component Based Electronic Voting Systems

    NASA Astrophysics Data System (ADS)

    Lundin, David

    An electronic voting system may be said to be composed of a number of components, each of which has a number of properties. One of the most attractive effects of this way of thinking is that each component may have an attached in-depth threat analysis and verification strategy. Furthermore, the need to include the full system when making changes to a component is minimised and a model at this level can be turned into a lower-level implementation model where changes can cascade to as few parts of the implementation as possible.

  19. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  20. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing using a novel similarity metric. Next, sensor data in one cluster are aggregated at the cluster-head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error-bound guarantee to compress the data while retaining a specified amount of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
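
    A minimal sketch of the variance-bounded PCA compression at a cluster head is shown below; the correlated readings are synthetic, and the clustering and cluster-head selection steps are omitted.

    ```python
    # Sketch: keep just enough principal components to retain a target fraction
    # of variance, i.e. an error-bound guarantee on the reconstruction.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    factors = rng.random((200, 4))     # a few latent environmental factors
    readings = factors @ rng.random((4, 30)) + 0.01 * rng.standard_normal((200, 30))

    pca = PCA(n_components=0.99)                    # retain >= 99% of the variance
    compressed = pca.fit_transform(readings)        # sent instead of raw data
    restored = pca.inverse_transform(compressed)    # reconstruction at the sink

    ratio = compressed.size / readings.size
    mse = ((readings - restored) ** 2).mean()
    print(f"compression ratio {ratio:.2f}, MSE {mse:.2e}")
    ```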

  1. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, during which slow, large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by the slowest mode alone and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
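
    The essence of tICA can be sketched in a few lines: solve the generalized eigenproblem of the symmetrized time-lagged covariance against the instantaneous covariance. The random-walk trajectory below is only a stand-in for rigid-body MD coordinates.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)
    X = np.cumsum(rng.standard_normal((10000, 6)), axis=0)  # stand-in trajectory
    X -= X.mean(axis=0)
    lag = 100  # lag time in frames; slow modes stay correlated beyond this lag

    C0 = X.T @ X / len(X)                          # instantaneous covariance
    Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)     # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                         # symmetrize

    # Eigenvalues rank the modes by autocorrelation, i.e. by time scale.
    eigvals, eigvecs = eigh(Ct, C0)
    order = np.argsort(eigvals)[::-1]
    slowest_mode = X @ eigvecs[:, order[0]]
    print("autocorrelation of the slowest mode at the chosen lag:", eigvals[order[0]])
    ```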

  2. EEG/fMRI fusion based on independent component analysis: integration of data-driven and model-driven methods.

    PubMed

    Lei, Xu; Valdes-Sosa, Pedro A; Yao, Dezhong

    2012-09-01

    Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) provide complementary noninvasive information on brain activity, and EEG/fMRI fusion can achieve higher spatiotemporal resolution than each modality separately. This paper focuses on independent component analysis (ICA)-based EEG/fMRI fusion. In order to appreciate the issues involved, we first describe the potential and limitations of the fusion approaches developed so far: fMRI-constrained EEG imaging, EEG-informed fMRI analysis, and symmetric fusion. We then outline some newly developed hybrid fusion techniques using ICA and combinations of data-driven and model-driven methods, with special mention of the spatiotemporal EEG/fMRI fusion (STEFF) approach. Finally, we discuss the current trend in methodological development and the existing limitations on extrapolating neural dynamics.

  3. Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.

    PubMed

    Siuly, Siuly; Li, Yan

    2015-04-01

    The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals, to determine valuable features from original epileptic EEG data, and to discover an efficient classifier for those features. An optimum allocation based principal component analysis method, named OA_PCA, is developed for feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large number of EEG data. Principal component analysis (PCA) is applied to construct uncorrelated components and to reduce the dimensionality of the OA samples for enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers are applied and tested: the least squares support vector machine (LS-SVM), the naive Bayes classifier (NB), the k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA). Furthermore, our approaches are compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% overall classification accuracy (OCA), an improvement of up to 7.10% over existing algorithms for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in epileptic EEG signal classification, outperforming all the recently reported methods in the literature.

  4. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and

  5. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA)-based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported unmixing algorithm [1,2,3]. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  6. Bearing fault recognition method based on neighbourhood component analysis and coupled hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Hongchao; Yuan, Haodong

    2016-01-01

    Due to the important role rolling element bearings play in rotating machines, condition monitoring and fault diagnosis systems should be established to avoid abrupt breakage during operation. Various features from the time, frequency and time-frequency domains are usually used for bearing or machinery condition monitoring. In this study, an NCA-based feature extraction (FE) approach is proposed to reduce the dimensionality of the original feature set and avoid the "curse of dimensionality". Furthermore, a coupled hidden Markov model (CHMM) based on multichannel data acquisition is applied to diagnose bearing or machinery faults. Two case studies are presented to validate the proposed approach for both bearing fault diagnosis and fault severity classification. The experimental results show that the proposed NCA-CHMM can remove redundant information, fuse data from different channels and improve the diagnosis results.
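
    The NCA step can be sketched with scikit-learn as below, with a k-nearest-neighbour classifier standing in for the coupled HMM; the features and fault labels are synthetic.

    ```python
    # Sketch: supervised dimensionality reduction with neighbourhood component
    # analysis (NCA) before fault classification.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import (KNeighborsClassifier,
                                   NeighborhoodComponentsAnalysis)
    from sklearn.pipeline import Pipeline

    # 24 time/frequency-domain features, 4 fault-severity classes (stand-in).
    X, y = make_classification(n_samples=300, n_features=24, n_informative=6,
                               n_classes=4, random_state=0)

    model = Pipeline([
        ("nca", NeighborhoodComponentsAnalysis(n_components=4, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ])
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
    ```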

  7. Integration of Multiple Components in Polystyrene-based Microfluidic Devices Part 2: Cellular Analysis

    PubMed Central

    Anderson, Kari B.; Halpin, Stephen T.; Johnson, Alicia S.; Martin, R. Scott; Spence, Dana M.

    2012-01-01

    In Part II of this series describing the use of polystyrene (PS) devices for microfluidic-based cellular assays, various cell types and detection strategies are employed to perform three fundamental assays often associated with cells. Specifically, using either integrated electrochemical sensing or optical measurements with a standard multi-well plate reader, cellular uptake, production, or release of important cellular analytes is determined on a PS-based device. One experiment involved the fluorescence measurement of nitric oxide (NO) produced within an endothelial cell line following stimulation with ATP. The result was a four-fold increase in NO production (as compared to a control), with this receptor-based mechanism of NO production verifying the maintenance of cell receptors following immobilization onto the PS substrate. The ability to monitor cellular uptake was also demonstrated by optical determination of Ca2+ uptake into endothelial cells following stimulation with the Ca2+ ionophore A23187. The result was a significant increase (42%) in calcium uptake in the presence of the ionophore, as compared to a control (17%) (p < 0.05). Finally, the release of catecholamines from a dopaminergic cell line (PC 12 cells) was electrochemically monitored, with the electrodes embedded in the PS-based device. The PC 12 cells adhered better to the PS devices than to PDMS. Potassium stimulation resulted in the release of 114 ± 11 µM catecholamines, a significant increase (p < 0.05) over the release from cells that had been exposed to an inhibitor (reserpine, 20 ± 2 µM of catecholamines). The ability to successfully measure multiple analytes, generated in different ways by the various cells under investigation, suggests that PS may be a useful material for microfluidic device fabrication, especially considering the enhanced cell adhesion to PS, its enhanced rigidity/amenability to automation, and its ability to enable a wider range of

  8. Vibration-based damage detection in an aircraft wing scaled model using principal component analysis and pattern recognition

    NASA Astrophysics Data System (ADS)

    Trendafilova, I.; Cartmell, M. P.; Ostachowicz, W.

    2008-06-01

    This study deals with vibration-based fault detection in structures and suggests a viable methodology based on principal component analysis (PCA) and a simple pattern recognition (PR) method. The frequency response functions (FRFs) of the healthy and the damaged structure are used as initial data. A PR procedure based on the nearest neighbour principle is applied to distinguish between the categories of the damaged and the healthy wing data. A modified PCA method is suggested here, which not only reduces the dimensionality of the FRFs but also makes the PCA-transformed data from the two categories more separable. It is applied to selected frequency bands of the FRFs, which permits the reduction of the PCA-transformed FRFs to two new variables, used as damage features. In this study, the methodology is developed and demonstrated using the vibration response of a scaled aircraft wing simulated by a finite element (FE) model. The suggested damage detection methodology is based purely on the analysis of the vibration response of the structure. This makes it quite generic and permits its potential development and application for measured vibration data from real aircraft wings as well as for other real and complex structures.
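
    The following sketch reduces FRFs to two PCA features and applies a nearest-neighbour rule, assuming labelled healthy/damaged responses are available; the synthetic resonance curves stand in for FE-simulated wing FRFs, and plain PCA replaces the modified PCA of the study.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(6)
    freqs = np.linspace(1, 100, 400)
    # Damage shifts the resonance slightly (40 Hz -> 38 Hz) in this toy model.
    healthy = 1 / np.abs(freqs[None, :] - 40 + 2j) + 0.05 * rng.random((30, 400))
    damaged = 1 / np.abs(freqs[None, :] - 38 + 2j) + 0.05 * rng.random((30, 400))

    X = np.vstack([healthy, damaged])     # FRF magnitudes (selected band)
    y = np.array([0] * 30 + [1] * 30)     # 0 = healthy, 1 = damaged

    features = PCA(n_components=2).fit_transform(X)   # two damage features
    clf = KNeighborsClassifier(n_neighbors=1).fit(features, y)
    print("training accuracy:", clf.score(features, y))
    ```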

  9. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, the LiDAR data is divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) and the white region represents the ground. The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and any neighbouring black pixel; no edge exists between a black pixel and a neighbouring white pixel. This produces a graph of disconnected components, where each component represents a prospective building or a patch of dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines extracted from the multispectral imagery around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges. The proposed technique is evaluated using two Australian data sets

  10. Quantitative analysis of multi-component complex oil spills based on the least-squares support vector regression

    NASA Astrophysics Data System (ADS)

    Tan, Ailing; Zhao, Yong; Wang, Siyuan

    2016-10-01

    Quantitative analysis of simulated complex oil spills was investigated based on the PSO-LS-SVR method. Forty simulated mixed oil-spill samples were prepared with different concentration proportions of gasoline, diesel and kerosene oil, and their near-infrared spectra were collected. The parameters of the least squares support vector machine were optimized by a particle swarm optimization algorithm, and optimal quantitative concentration models for the three-component oil spills were established. The best regularization parameter C and kernel parameter σ were 48.1418 and 0.1067 for the gasoline model, 53.2820 and 0.1095 for the diesel model, and 59.1689 and 0.1000 for the kerosene model, respectively. The determination coefficients R2 of the prediction models were 0.9983, 0.9907 and 0.9942, respectively, and the RMSEP values were 0.0753, 0.1539 and 0.0789, respectively. For the gasoline, diesel and kerosene models, the mean and variance of the prediction absolute error were -0.0176±0.0636 μL/mL, -0.0084±0.1941 μL/mL and 0.00338±0.0726 μL/mL, respectively. The results showed that the concentration of each component of the oil-spill samples could be determined by NIR technology combined with the PSO-LS-SVR regression method, and that the prediction results were accurate and reliable; this method can thus provide an effective means for the quantitative detection and analysis of complex marine oil spills.

  11. Failure Analysis of Ceramic Components

    SciTech Connect

    B.W. Morris

    2000-06-29

    Ceramics are being considered for a wide range of structural applications due to their low density and their ability to retain strength at high temperatures. The inherent brittleness of monolithic ceramics requires a departure from the deterministic design philosophy utilized to analyze metallic structural components. The design program "Ceramic Analysis and Reliability Evaluation of Structures Life" (CARES/LIFE), developed by NASA Lewis Research Center, uses a probabilistic approach to predict the reliability of monolithic components under operational loading. The objective of this study was to develop an understanding of the theories used by CARES/LIFE to predict the reliability of ceramic components and to assess the ability of CARES/LIFE to accurately predict the fast fracture behavior of monolithic ceramic components. A finite element analysis was performed to determine the temperature and stress distribution of a silicon carbide O-ring under diametral compression. The results of the finite element analysis were supplied as input to CARES/LIFE to determine the fast fracture reliability of the O-ring. Statistical material strength parameters were calculated from four-point flexure bar test data. The predicted reliability showed excellent correlation with O-ring compression test data, indicating that the CARES/LIFE program can be used to predict the reliability of ceramic components subjected to complicated stress states using material properties determined from simple uniaxial tensile tests.

  12. Hybrid dimensionality reduction method based on support vector machine and independent component analysis.

    PubMed

    Moon, Sangwoo; Qi, Hairong

    2012-05-01

    This paper presents a new hybrid dimensionality reduction method that seeks a projection through the optimization of both structural risk (a supervised criterion) and data independence (an unsupervised criterion). Classification accuracy is used as the metric to evaluate the performance of the method. By minimizing the structural risk, a projection derived from the decision boundaries directly improves the classification performance from a supervised perspective. From an unsupervised perspective, a projection can also be obtained based on maximum independence among the features (or attributes) of the data, to indirectly achieve better classification accuracy through a more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimal redundancy exists between them, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid method, which satisfies both criteria simultaneously, provides higher classification performance, especially for noisy data sets, in a relatively lower-dimensional space than various existing methods.

  13. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    SciTech Connect

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-07

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  14. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  15. Daily PM2.5 concentration prediction based on principal component analysis and LSSVM optimized by cuckoo search algorithm.

    PubMed

    Sun, Wei; Sun, Jingyi

    2017-03-01

    Increased attention has been paid to PM2.5 pollution in China. Due to its detrimental effects on the environment and health, it is important to establish a high-precision PM2.5 concentration forecasting model for monitoring and control. This paper presents a novel hybrid model based on principal component analysis (PCA) and least squares support vector machine (LSSVM) optimized by cuckoo search (CS). First, PCA is adopted to extract the original features and reduce the dimension for input selection. Then LSSVM is applied to predict the daily PM2.5 concentration. The parameters of the LSSVM are fine-tuned by CS to improve its generalization. An experimental study reveals that the proposed approach outperforms both a single LSSVM model with default parameters and a general regression neural network (GRNN) model in PM2.5 concentration prediction. The established model therefore has the potential to be applied in air quality forecasting systems.
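
    A sketch of the PCA-plus-regression chain is given below, with kernel ridge regression as a close stand-in for the LSSVM and a plain grid search in place of cuckoo search; the daily predictors are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(7)
    X = rng.random((365, 10))                                   # daily predictors (stand-in)
    y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(365)   # PM2.5 proxy

    model = Pipeline([("pca", PCA(n_components=5)),
                      ("krr", KernelRidge(kernel="rbf"))])
    search = GridSearchCV(model, {"krr__alpha": [0.01, 0.1, 1.0],
                                  "krr__gamma": [0.1, 1.0, 10.0]}, cv=5)
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    ```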

  16. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components all of the textbooks content (including texts, exercises, and illustrations) was examined based on…

  17. Spherical mesh adaptive direct search for separating quasi-uncorrelated sources by range-based independent component analysis.

    PubMed

    Selvan, S Easter; Borckmans, Pierre B; Chattopadhyay, A; Absil, P-A

    2013-09-01

    It seems paradoxical, given the classical definition of independent component analysis (ICA), that in reality the true sources are often not strictly uncorrelated. With this in mind, this letter concerns a framework to extract quasi-uncorrelated sources with finite supports by optimizing a range-based contrast function under unit-norm constraints (to handle the inherent scaling indeterminacy of ICA) but without orthogonality constraints. Despite the appealing contrast properties of the range-based function (e.g., the absence of mixing local optima), the function is not differentiable everywhere. Unfortunately, there is a dearth of literature on derivative-free optimizers that effectively handle such a nonsmooth yet promising contrast function. This is the compelling reason for the design of a nonsmooth optimization algorithm on a manifold of matrices having unit-norm columns, with the following objectives: to ascertain convergence to a Clarke stationary point of the contrast function and to adhere to the necessary unit-norm constraints more naturally. The proposed nonsmooth optimization algorithm crucially relies on the design and analysis of an extension of the mesh adaptive direct search (MADS) method to handle locally Lipschitz objective functions defined on the sphere. The applicability of the algorithm in the ICA domain is demonstrated with simulations involving natural, face, aerial, and texture images.

  18. Simultaneous fingerprint, quantitative analysis and anti-oxidative based screening of components in Rhizoma Smilacis Glabrae using liquid chromatography coupled with Charged Aerosol and Coulometric array Detection.

    PubMed

    Yang, Guang; Zhao, Xin; Wen, Jun; Zhou, Tingting; Fan, Guorong

    2017-04-01

    An analytical approach including fingerprinting, quantitative analysis and rapid screening of anti-oxidative components was established and successfully applied to the comprehensive quality control of Rhizoma Smilacis Glabrae (RSG), a well-known Traditional Chinese Medicine used both as medicine and food. Thirteen components were tentatively identified based on their retention behavior, UV absorption and MS fragmentation patterns. Chemometric analysis based on coulometric array data was performed to evaluate the similarity and variation between fifteen batches. Eight discriminating components were quantified using single-compound calibration. The unit responses of those components in coulometric array detection were calculated and compared with those of several compounds reported to possess antioxidant activity, and four of them were tentatively identified as the main contributors to the total anti-oxidative activity. The main advantage of the proposed approach was that it realized simultaneous fingerprinting, quantitative analysis and screening of anti-oxidative components, providing comprehensive information for the quality assessment of RSG.

  19. Simultaneous multi-wavelength phase-shifting interferometry based on principal component analysis with a color CMOS

    NASA Astrophysics Data System (ADS)

    Fan, Jingping; Lu, Xiaoxu; Xu, Xiaofei; Zhong, Liyun

    2016-05-01

    From a sequence of simultaneous multi-wavelength phase-shifting interferograms (SMWPSIs) recorded by a color CMOS camera, a principal component analysis (PCA)-based multi-wavelength interferometry (MWI) method is proposed. First, a sequence of SMWPSIs with unknown phase shifts is recorded with a single-chip color CMOS camera. Subsequently, the wrapped single-wavelength phases are retrieved with the PCA algorithm. Finally, the unambiguous phase of the extended synthetic wavelength is obtained by subtraction between the wrapped single-wavelength phases. In addition, to eliminate the additional phase introduced by the microscope and the intensity crosstalk among the three color channels, a two-step phase compensation method, with and without the measured object in the experimental system, is employed. Compared with conventional single-wavelength phase-shifting interferometry, because neither phase-shift calibration nor phase unwrapping is required, the actual unambiguous phase of the measured object can be obtained conveniently with the proposed PCA-based MWI method. Both numerical simulations and experimental results demonstrate that the proposed PCA-based MWI method can enlarge the measuring range without amplifying the noise level.

  20. Multivariate Principal Component Analysis and Case-Based Reasoning for monitoring, fault detection and diagnosis in a WWTP.

    PubMed

    Ruiz, Magda; Sin, Gürkan; Berjaga, Xavier; Colprim, Jesús; Puig, Sebastià; Colomer, Joan

    2011-01-01

    The main idea of this paper is to develop a methodology for process monitoring, fault detection and predictive diagnosis of a WasteWater Treatment Plant (WWTP). To achieve this goal, a combination of Multiway Principal Component Analysis (MPCA) and Case-Based Reasoning (CBR) is proposed. First, MPCA is used to reduce the multi-dimensional nature of online process data, summarising most of the variance of the process data in a few (new) variables. Next, the outputs of MPCA (t-scores, Q-statistic) are provided as inputs (descriptors) to the CBR method, which is employed to identify problems and propose appropriate solutions (hence diagnosis) based on previously stored cases. The methodology is evaluated on a pilot-scale SBR performing nitrogen, phosphorus and COD removal, where it helps to diagnose abnormal situations in the process operation. Finally, it is believed that the methodology is a promising tool for automatic diagnosis and real-time warning, which can be used for the daily management of plant operation.
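
    The monitoring statistics at the heart of such a scheme can be sketched as follows; plain (not multiway) PCA is used for brevity, the data are synthetic, and the CBR stage that would consume these descriptors is omitted.

    ```python
    # Sketch: Hotelling's T2 (score distance) and Q/SPE (residual) statistics
    # from a PCA model of normal operation; these are the CBR descriptors.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)
    X_train = rng.random((300, 15))           # normal-operation data (stand-in)
    pca = PCA(n_components=4).fit(X_train)

    def monitoring_stats(x):
        """Return (T2, Q) for one new observation vector x."""
        t = pca.transform(x[None, :])[0]                  # scores
        t2 = np.sum(t ** 2 / pca.explained_variance_)     # Hotelling's T2
        residual = x - pca.inverse_transform(t[None, :])[0]
        q = np.sum(residual ** 2)                         # Q (SPE) statistic
        return t2, q

    t2, q = monitoring_stats(rng.random(15))
    print(f"T2 = {t2:.2f}, Q = {q:.4f}")  # compared against control limits
    ```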

  1. Independent Component Analysis-Support Vector Machine-Based Computer-Aided Diagnosis System for Alzheimer's with Visual Support.

    PubMed

    Khedher, Laila; Illán, Ignacio A; Górriz, Juan M; Ramírez, Javier; Brahim, Abdelbasset; Meyer-Baese, Anke

    2017-05-01

    Computer-aided diagnosis (CAD) systems constitute a powerful tool for early diagnosis of Alzheimer's disease (AD), but limitations in interpretability and performance exist. In this work, a fully automatic CAD system based on supervised learning methods is proposed, applied to segmented brain magnetic resonance imaging (MRI) from Alzheimer's Disease Neuroimaging Initiative (ADNI) participants for automatic classification. The proposed CAD system possesses two relevant characteristics: optimal performance and visual support for decision making. The CAD is built in two stages: first, feature extraction based on independent component analysis (ICA) on class mean images and, second, support vector machine (SVM) training and classification. The features obtained offer a full graphical representation of the images, giving an understandable logic to the CAD output that can increase confidence in the CAD support. The proposed method yields classification results of up to 89% accuracy (with 92% sensitivity and 86% specificity) for normal controls (NC) vs. AD patients, 79% accuracy (with 82% sensitivity and 76% specificity) for NC vs. mild cognitive impairment (MCI), and 85% accuracy (with 85% sensitivity and 86% specificity) for MCI vs. AD patients.

  2. High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang

    2016-07-01

    Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, development of systems with high imaging speed, reasonable cost, and manageable data flow is needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images from sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, thus validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that of BP with full sampling. In addition, compared with compressed-sensing-based PACT, PCA-PACT had an approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.
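
    The non-iterative reconstruction idea can be sketched as follows, assuming a PCA basis learned from fully sampled training images and a random 50% sampling mask; all data are synthetic stand-ins for photoacoustic measurements.

    ```python
    # Sketch: recover an image from sparse samples by least squares in a
    # low-dimensional PCA space, with no iterative reconstruction loop.
    import numpy as np

    rng = np.random.default_rng(9)
    n_train, n_pixels, n_comp = 200, 1024, 20
    train = rng.random((n_train, n_pixels))       # fully sampled training images

    mean = train.mean(axis=0)
    U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = Vt[:n_comp].T                         # (n_pixels, n_comp) PCA basis

    x_true = train[0]                             # pretend this is a new image
    idx = rng.choice(n_pixels, size=n_pixels // 2, replace=False)  # 50% sampling

    # Solve for PC coefficients using only the sparsely sampled entries.
    coef, *_ = np.linalg.lstsq(basis[idx], x_true[idx] - mean[idx], rcond=None)
    x_rec = mean + basis @ coef                   # one-shot reconstruction

    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```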

  3. Identification and analysis of labor productivity components based on ACHIEVE model (case study: staff of Kermanshah University of Medical Sciences).

    PubMed

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2014-12-15

    Identification and analysis of the components of labor productivity based on the ACHIEVE model were performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which 270 employees from different administrative groups (contractual, fixed-term and regular) were selected from the 872 personnel of Kermanshah University of Medical Sciences through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire, which was confirmed in terms of content and face validity, and whose reliability was calculated using Cronbach's alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions, namely environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding) and ability (knowledge and skills), and for total labor productivity were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff's view, were in a favorable condition and needed to be further taken into consideration by the authorities.

  4. Quantitative profiling of polar metabolites in herbal medicine injections for multivariate statistical evaluation based on independence principal component analysis.

    PubMed

    Jiang, Miaomiao; Jiao, Yujiao; Wang, Yuefei; Xu, Lei; Wang, Meng; Zhao, Buchang; Jia, Lifu; Pan, Hao; Zhu, Yan; Gao, Xiumei

    2014-01-01

    Botanical primary metabolites are extensively present in herbal medicine injections (HMIs) but are often ignored in quality control. Because the routinely applied reversed-phase chromatographic fingerprint technology is limited in its coverage of strongly hydrophilic substances, primary metabolites with strong polarity, such as saccharides, amino acids and organic acids, are usually difficult to detect with it. In this study, a proton nuclear magnetic resonance (1H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for the further multivariate quality control assay. The quantitative 1H NMR method was validated with good linearity, precision, repeatability, stability and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of the χ2 and Hotelling T2 control charts. Using the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to detect out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid and protocatechuic aldehyde. The integrated strategy has provided a reliable approach to identifying and quantifying multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs.

  5. Inductive robust principal component analysis.

    PubMed

    Bao, Bing-Kun; Liu, Guangcan; Xu, Changsheng; Yan, Shuicheng

    2012-08-01

    In this paper, we address the error correction problem, that is, to uncover the low-dimensional subspace structure from high-dimensional observations that are possibly corrupted by errors. When the errors follow a Gaussian distribution, Principal Component Analysis (PCA) can find the optimal (in the least-square-error sense) low-rank approximation to high-dimensional data. However, the canonical PCA method is known to be extremely fragile to the presence of gross corruptions. Recently, Wright et al. established the so-called Robust Principal Component Analysis (RPCA) method, which handles grossly corrupted data well [14]. However, RPCA is a transductive method and does not handle well new samples that were not involved in the training procedure. Given a new datum, RPCA essentially needs to recalculate over all the data, resulting in high computational cost, so RPCA is inappropriate for applications that require fast online computation. To overcome this limitation, in this paper we propose an Inductive Robust Principal Component Analysis (IRPCA) method. Given a set of training data, unlike RPCA, which aims to recover the original data matrix, IRPCA aims at learning the underlying projection matrix, which can be used to efficiently remove the possible corruptions in any datum. The learning is done by solving a nuclear-norm-regularized minimization problem, which is convex and can be solved in polynomial time. Extensive experiments on a benchmark human face dataset and two video surveillance datasets show that IRPCA is not only robust to gross corruptions but also handles new data efficiently.
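
    For context, the transductive RPCA baseline that IRPCA improves on can be sketched as principal component pursuit solved by the standard inexact augmented Lagrange multiplier scheme; the parameter choices below follow common defaults, and the code is illustrative rather than the authors' implementation.

```python
# Minimal principal component pursuit (the transductive RPCA baseline)
# solved by inexact ALM with common default parameters.
import numpy as np

def rpca_pcp(M, max_iter=500, tol=1e-7):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual variable init
    mu = 1.25 / norm_two
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # sparse update: entrywise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z = M - L - S
        Y = Y + mu * Z
        mu *= 1.5
        if np.linalg.norm(Z) / np.linalg.norm(M) < tol:
            break
    return L, S

# note: a new datum forces a full recomputation over the augmented matrix,
# which is exactly the cost that IRPCA's learned projection matrix avoids
```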

  6. Principal components analysis competitive learning.

    PubMed

    López-Rubio, Ezequiel; Ortiz-de-Lazcano-Lobato, Juan Miguel; Muñoz-Pérez, José; Gómez-Ruiz, José Antonio

    2004-11-01

    We present a new neural model that extends classical competitive learning by performing a principal components analysis (PCA) at each neuron. This model improves on known local PCA methods because it does not require presenting the entire data set to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction properties of PCA. Furthermore, every neuron is able to modify its behavior to adapt to the local dimensionality of the input distribution; hence, our model has a dimensionality estimation capability. The experimental results we present show the dimensionality-reduction capabilities of the model with multisensor images.

  7. Estimating stellar atmospheric parameters, absolute magnitudes and elemental abundances from the LAMOST spectra with Kernel-based principal component analysis

    NASA Astrophysics Data System (ADS)

    Xiang, M.-S.; Liu, X.-W.; Shi, J.-R.; Yuan, H.-B.; Huang, Y.; Luo, A.-L.; Zhang, H.-W.; Zhao, Y.-H.; Zhang, J.-N.; Ren, J.-J.; Chen, B.-Q.; Wang, C.; Li, J.; Huo, Z.-Y.; Zhang, W.; Wang, J.-L.; Zhang, Y.; Hou, Y.-H.; Wang, Y.-F.

    2017-01-01

    Accurate determination of stellar atmospheric parameters and elemental abundances is crucial for Galactic archaeology via large-scale spectroscopic surveys. In this paper, we estimate stellar atmospheric parameters - effective temperature Teff, surface gravity log g and metallicity [Fe/H], absolute magnitudes MV and MKs, α-element to metal (and iron) abundance ratio [α/M] (and [α/Fe]), as well as carbon and nitrogen abundances [C/H] and [N/H] from the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) spectra with a multivariate regression method based on kernel-based principal component analysis, using stars in common with other surveys (Hipparcos, Kepler, Apache Point Observatory Galactic Evolution Experiment) as training data sets. Both internal and external examinations indicate that given a spectral signal-to-noise ratio (SNR) better than 50, our method is capable of delivering stellar parameters with a precision of ˜100 K for Teff, ˜0.1 dex for log g, 0.3-0.4 mag for MV and MKs, 0.1 dex for [Fe/H], [C/H] and [N/H], and better than 0.05 dex for [α/M] ([α/Fe]). The results are satisfactory even for a spectral SNR of 20. The work presents the first determinations of [C/H] and [N/H] abundances from a vast LAMOST data set and, to our knowledge, the first reported implementation of absolute magnitude estimation directly based on a vast data set of observed spectra. The derived stellar parameters for millions of stars from the LAMOST surveys will be publicly available in the form of value-added catalogues.
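
    A hedged sketch of the regression scheme, with scikit-learn's KernelPCA feeding a linear regressor; the flux array, label values, and kernel parameters below are placeholders rather than the survey pipeline.

```python
# Sketch: kernel PCA features from spectra feeding a linear regressor.
# Random arrays stand in for LAMOST fluxes and cross-matched training labels;
# the kernel choice and gamma are assumptions, not the authors' settings.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
flux = rng.normal(size=(500, 3000))   # 500 training spectra (placeholder)
teff = rng.uniform(4000, 7000, 500)   # effective temperatures (placeholder)

model = make_pipeline(
    KernelPCA(n_components=50, kernel='rbf', gamma=1e-4),
    Ridge(alpha=1.0),
)
model.fit(flux, teff)
print(model.predict(flux[:3]))
```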

  8. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation with detectors to obtain signals. These signals are analyzed to obtain indications of what is happening within the system or to determine problems that may be present. For multi-phase system analysis, more than one radiotracer is used, and the result is a mixture of radiotracer signals. The problem in such cases is how to separate these signals from each other. This paper presents a proposed method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals. Two different radiotracers (Technetium-99m (Tc(99m)) and Barium-137m (Ba(137m))) were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals were mixed, and signal processing steps including background correction and signal de-noising were performed before applying the signal separation algorithms. Three separation algorithms were carried out: a time-domain-based separation algorithm, an Independent Component Analysis (ICA)-based separation algorithm, and a Principal Component Analysis (PCA)-based separation algorithm. The results proved the superiority of the PCA-based separation algorithm over the other separation algorithms, and the PCA-based algorithm combined with the signal processing steps gives a considerable improvement in the separation process.
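
    A toy version of the PCA-based separation step: diagonalizing the covariance of two mixed detector signals yields decorrelated component estimates, recovered only up to scale and sign. The tracer response shapes and the mixing matrix below are invented for illustration.

```python
# Toy PCA-based separation of two mixed detector signals. PCA only
# decorrelates, so recovery is up to scale/sign and works here because the
# stand-in tracer responses differ in shape and power.
import numpy as np

t = np.linspace(0, 60, 2000)                    # time axis, s
s1 = np.exp(-t / 10) * (t > 5)                  # stand-in tracer response 1
s2 = np.exp(-((t - 30) ** 2) / 20)              # stand-in tracer response 2
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                      # assumed detector mixing
X = A @ np.vstack([s1, s2])                     # observed mixed signals (2 x N)

Xc = X - X.mean(axis=1, keepdims=True)
_, V = np.linalg.eigh(np.cov(Xc))               # principal directions (2x2)
recovered = V.T @ Xc                            # decorrelated estimates
```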

  9. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    PubMed

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model of borneol based on principal component analysis (PCA) and a support vector machine (SVM) was established. As a common agent in Chinese medicine, borneol comes from different sources and is easily confused in pharmaceutical and trade contexts, so a rapid, simple and accurate detection and identification method is needed. Quickly, efficiently and correctly identifying borneol is significant for assuring the quality of borneol products, protecting consumers' rights, and supporting the production and trade of borneol. Terahertz time-domain spectroscopy is a new spectroscopic approach that characterizes materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. The PCA score plots in 2D (PC1 × PC2) and 3D (PC1 × PC2 × PC3) of the three kinds of borneol samples were obtained through PCA, and both showed good clustering of the three kinds of borneol. The score matrix of the first 10 principal components (PCs) was used to replace the original spectral data; 60 samples of the three kinds of borneol were used for training and another 60 unknown samples were then identified. Four support vector machine models with different kernel functions were set up in this way. Results show that the identification and classification accuracy of the SVM with the RBF kernel function is 100% for the three kinds of borneol, so the SVM with the radial basis kernel function was selected to establish the borneol identification model. In addition, in the noisy case, the classification accuracy rates of all four SVM kernel functions are above 85%, indicating that SVM has strong generalization ability. This study shows that PCA combined with SVM on borneol terahertz spectroscopy has good classification and identification performance, and provides a new method for species identification.
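
    The classification chain described above (first 10 PC scores into an RBF-kernel SVM) can be sketched as a scikit-learn pipeline; the spectra and labels here are placeholders, not the measured THz data.

```python
# Sketch of PCA(10 components) followed by an RBF-kernel SVM classifier.
# Random arrays stand in for the 0.2-2 THz absorption spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
spectra = rng.normal(size=(60, 400))   # 60 training spectra (placeholder grid)
labels = np.repeat([0, 1, 2], 20)      # three kinds of borneol

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel='rbf'))
clf.fit(spectra, labels)
print(clf.predict(spectra[:5]))
```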

  10. Functional Generalized Structured Component Analysis.

    PubMed

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA is geared toward the analysis of multivariate data. Accordingly, it cannot deal with functional data, which often involve different measurement occasions across participants and a number of measurement occasions that exceeds the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that map infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.

  11. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  12. Fast Steerable Principal Component Analysis.

    PubMed

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-03-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.

  13. Relating Essential Proteins to Drug Side-Effects Using Canonical Component Analysis: A Structure-Based Approach.

    PubMed

    Liu, Tianyun; Altman, Russ B

    2015-07-27

    The molecular mechanisms of many drug side-effects are unknown and difficult to predict. Previous methods for explaining side-effects have focused on known drug targets and their pathways. However, low-affinity binding to proteins that are not usually considered drug targets may also drive side-effects. In order to assess these alternative targets, we systematically used the 3D structures of 563 essential human proteins to predict binding to 216 drugs. We first benchmarked our affinity predictions with available experimental data. We then combined singular value decomposition and canonical component analysis (SVD-CCA) to predict side-effects based on these novel target profiles. Our method predicts side-effects with good accuracy (average AUC: 0.82 for side-effects present in <50% of drug labels). We also noted that side-effect frequency is the most important feature for prediction and can confound efforts at elucidating mechanism; our method allows us to remove the contribution of frequency and isolate novel biological signals. In particular, our analysis produces 2768 triplet associations between 50 essential proteins, 99 drugs, and 77 side-effects. Although experimental validation is difficult because many of our essential proteins do not have validated assays, we nevertheless attempted to validate a subset of these associations using experimental assay data. Our focus on essential proteins allows us to find potential associations that would likely be missed if we used recognized drug targets. Our associations provide novel insights into the molecular mechanisms of drug side-effects and highlight the need for expanded experimental efforts to investigate drug binding to proteins more broadly.
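
    A minimal sketch of the SVD-plus-CCA idea with scikit-learn; the binding and side-effect matrices are random stand-ins, and reducing the wide binding matrix by SVD before plain CCA only approximates the paper's SVD-CCA procedure.

```python
# Sketch: SVD-reduce drug-by-protein binding profiles, then relate them to
# drug-by-side-effect labels with canonical correlation analysis.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
binding = rng.normal(size=(216, 563))                        # predicted affinities
effects = (rng.random(size=(216, 77)) < 0.2).astype(float)   # label presence

# SVD reduces the wide binding matrix before CCA (mirroring SVD-CCA)
U, s, _ = np.linalg.svd(binding, full_matrices=False)
target_scores = U[:, :20] * s[:20]

cca = CCA(n_components=5)
Xc, Yc = cca.fit_transform(target_scores, effects)
corrs = [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(5)]
print("canonical correlations:", np.round(corrs, 2))
```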

  14. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: the connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.

  15. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  16. Bayesian robust principal component analysis.

    PubMed

    Ding, Xinghao; He, Lihan; Carin, Lawrence

    2011-12-01

    A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.

  17. [Analysis and comparison of intestinal absorption of components of Gegenqinlian decoction in different combinations based on pharmacokinetic parameters].

    PubMed

    Zhang, Yi-Zhu; An, Rui; Yuan, Jin; Wang, Yue; Gu, Qing-Qing; Wang, Xin-Hong

    2013-10-01

    To analyse and compare the characteristics of the intestinal absorption of puerarin, baicalin, berberine and liquiritin in different combinations of Gegenqinlian decoction based on pharmacokinetic parameters, a sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was applied for the quantification of the four components in rat plasma. Pharmacokinetic parameters were then determined from the plasma concentration-time data with the DAS software package. The influence of different combinations on the pharmacokinetics of the four components was studied to analyse and compare their absorption differences, together with the results of an in vitro everted gut model and a rat single-pass intestinal perfusion model. The results showed that, compared with other combinations, the AUC values of puerarin, baicalin and berberine were increased significantly in the Gegenqinlian decoction group, while the AUC value of liquiritin was reduced. Moreover, the absorption of the four components was increased significantly, as supported by the results from the in vitro everted gut model and the rat single-pass intestinal perfusion model, indicating that Gegenqinlian decoction may promote the absorption of the four components and accelerate the metabolism of liquiritin by cytochrome P450.
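
    The AUC comparisons above rest on the usual non-compartmental estimate; a minimal sketch of the linear trapezoidal rule on illustrative concentration-time data follows.

```python
# Linear trapezoidal AUC from concentration-time data (values illustrative,
# not from the study).
import numpy as np

t = np.array([0.0, 0.25, 0.5, 1, 2, 4, 8, 12, 24])       # time, h
c = np.array([0.0, 120, 180, 150, 95, 60, 30, 12, 3.0])  # concentration, ng/mL

auc = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()
print(f"AUC(0-24 h) = {auc:.1f} ng*h/mL")
```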

  18. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multi-component spectra. The method requires no user input or judgment, so there are no parameters to be tweaked. Furthermore, it provides a reliability factor for the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), together with several quantitative metrics. It has been developed for automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
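
    A small sketch of the unmixing stage using scikit-learn's FastICA on simulated pigment spectra; the Gaussian bands, mixing fractions, and noise level are assumptions for illustration, not the paper's data.

```python
# Sketch: FastICA unmixing of simulated multi-component Raman spectra.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
shift = np.linspace(200, 1800, 1200)            # Raman shift axis, cm^-1

def band(mu, width):                            # invented Gaussian bands
    return np.exp(-((shift - mu) / width) ** 2)

ref1 = band(450, 15) + 0.7 * band(1080, 20)     # stand-in pigment spectrum 1
ref2 = band(610, 12) + 0.5 * band(1440, 25)     # stand-in pigment spectrum 2

mix = np.array([[0.8, 0.2], [0.5, 0.5], [0.3, 0.7]])   # assumed mixtures
observed = mix @ np.vstack([ref1, ref2])
observed += 0.01 * rng.normal(size=observed.shape)     # measurement noise

# each wavenumber channel is treated as one observation of the 3 mixtures
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(observed.T).T       # recovered spectra (scaled)
```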

  19. Principal component analysis-based anatomical motion models for use in adaptive radiation therapy of head and neck cancer patients

    NASA Astrophysics Data System (ADS)

    Chetvertkov, Mikhail A.

    Purpose: To develop standard and regularized principal component analysis (PCA) models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients, assess their potential use in adaptive radiation therapy (ART), and to extract quantitative information for treatment response assessment. Methods: Planning CT (pCT) images of H&N patients were artificially deformed to create "digital phantom" images, which modeled systematic anatomical changes during Radiation Therapy (RT). Artificial deformations closely mirrored patients' actual deformations, and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms), and between pCT and clinical CBCTs. Patient-specific standard PCA (SPCA) and regularized PCA (RPCA) models were built from these synthetic and clinical DVF sets. Eigenvectors, or eigenDVFs (EDVFs), having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Modeled anatomies were used to assess the dose deviations with respect to the planned dose distribution. Results: PCA models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade SPCA's ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes, and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. For dose assessment it has been shown that the modeled dose distribution was different from the planned dose for the parotid glands due to their shrinkage and shift into

  20. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Wada, Ted S.

    In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the acoustic echo cancellation (AEC) problem. Such a “system” perspective aims for the integration of individual components, or algorithms, into a cohesive unit for the benefit of the system as a whole to cope with real-world enhancement problems. The standard system identification approach by minimizing the mean square error (MSE) of a linear system is sensitive to distortions that greatly affect the quality of the identification result. Therefore, we begin by examining in detail the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of the LMS algorithm when there is an interference at the near end, where the source of distortion may be linear or nonlinear. We provide a thorough derivation and analysis of the error recovery nonlinearity (ERN) that “enhances” the filter estimation error prior to the adaptation to transform the corrupted error’s distribution into a desired one, or very close to it, in order to assist the linear adaptation process. We reveal important connections of the residual echo enhancement (REE) technique to other existing AEC and signal enhancement procedures, where the technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. Notably, the single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. Indeed, SBSS optimized via ICA leads to the system combination of the LMS algorithm with the ERN that allows continuous and stable adaptation even during double talk. Next, we extend the system perspective

  1. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  2. Modeling and Prediction of Monthly Total Ozone Concentrations by Use of an Artificial Neural Network Based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Surajit; Chattopadhyay, Goutami

    2012-10-01

    In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.
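
    A hedged sketch of the model chain, with PCA orthogonalizing the three predictors before a sigmoid multilayer perceptron; the monthly values are fabricated and the network size is a guess, not the authors' configuration.

```python
# Sketch: PCA to remove multicollinearity among the predictors, then a
# multilayer perceptron with sigmoid (logistic) activation for regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 3))    # cloud cover, mean temperature, rainfall
y = rng.normal(280, 20, 120)     # monthly total ozone (DU), placeholder

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                        # orthogonalized predictors
    MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                 max_iter=5000, random_state=0),
)
model.fit(X, y)
```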

  3. Independent component analysis in spiking neurons.

    PubMed

    Savin, Cristina; Joshi, Prashant; Triesch, Jochen

    2010-04-22

    Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered, when the activity of different neurons is decorrelated by adaptive lateral inhibition.

  4. Evaluation of the aroma quality of Chinese traditional soy paste during storage based on principal component analysis.

    PubMed

    Peng, Xingyun; Li, Xin; Shi, Xiaodi; Guo, Shuntang

    2014-05-15

    Soy paste, a fermented soybean product, is widely used for flavouring in East and Southeast Asian countries. The characteristic aroma of soy paste is important throughout its shelf life. This study extracted volatile compounds via headspace solid-phase microextraction and conducted a quantitative analysis of 15 key volatile compounds using gas chromatography and gas chromatography-mass spectrometry. Changes in aroma content during storage were analyzed using an accelerated storage model (40 °C, 28 days). Over the 28 days of storage, the results showed that among key soy paste volatile compounds, alcohol and aldehyde contents decreased by 35% and 26%, respectively. By contrast, acid, ester, and heterocycle contents increased by 130%, 242%, and 15%, respectively. The overall odour type transformed from a floral to a roasting aroma. According to sample clustering in the principal component analysis, the storage life of soy paste could be divided into three periods, representing the floral, roasting, and pungent aroma types of soy paste.

  5. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.

  6. Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
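
    In the spirit of the sequential extraction described above, the sketch below extracts principal vectors one at a time by stochastic gradient descent with deflation; it uses plain Oja updates with a fixed learning rate rather than DOGEDYN's dominant-element rule and dynamic initial learning rate.

```python
# Illustrative sequential PCA by gradient descent: one principal vector is
# learned at a time from a stream of sample vectors, then deflated out.
# This is the classic Oja update, not the DOGEDYN variant.
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(size=(5000, 8)) @ rng.normal(size=(8, 8))  # correlated stream
data -= data.mean(axis=0)

def extract_principal(X, n_vectors=3, eta=1e-3, sweeps=20):
    X = X.copy()
    vectors = []
    for _ in range(n_vectors):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(sweeps):
            for x in X:                        # one vector per sampling time
                y = x @ w
                w += eta * y * (x - y * w)     # Oja's gradient-descent update
            w /= np.linalg.norm(w)
        vectors.append(w)
        X -= np.outer(X @ w, w)                # deflate before the next vector
    return np.array(vectors)

W = extract_principal(data)
```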

  7. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  8. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    PubMed

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze the movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization.

  9. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the ICA-R cost function and paid little attention to improving the cost function itself; they did not fully exploit the prior information about the desired signal. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function and then construct a new cost function that directly uses a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve computing the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
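
    For orientation, a classic one-unit fixed-point ICA extraction (without the reference signal that ICA-R adds) can be written in a few lines; the two-source mixture below is simulated, and this is the standard FastICA update rather than the authors' improved cost function.

```python
# Classic one-unit fixed-point ICA extraction with tanh nonlinearity,
# after the centering and whitening preprocessing the abstract mentions.
import numpy as np

rng = np.random.default_rng(7)
n = 10000
sources = np.vstack([np.sign(rng.normal(size=n)),   # binary source
                     rng.uniform(-1, 1, n)])        # uniform source
X = rng.normal(size=(2, 2)) @ sources               # observed mixtures

# centering and whitening
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)).T @ X

g = np.tanh
g_prime = lambda u: 1.0 - np.tanh(u) ** 2

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):                                # fixed-point iteration
    wu = w @ Z
    w_new = (Z * g(wu)).mean(axis=1) - g_prime(wu).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w @ w_new) > 1 - 1e-9
    w = w_new
    if converged:
        break

component = w @ Z                                   # one extracted source
```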

  10. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    PubMed Central

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting, and particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
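
    A compact sketch of the wavelet-MLR idea using the pywt package: the series is split into per-level subseries by multiresolution reconstruction, and the next day's price is regressed on today's subseries values. The PSO tuning and PCA stages are omitted, and the price series is a random-walk placeholder; the wavelet and level are assumptions.

```python
# Sketch: wavelet multiresolution subseries feeding a multiple linear
# regression one step ahead. 'db4' and level=3 are illustrative choices.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
price = np.cumsum(rng.normal(0, 1, 512)) + 80.0    # placeholder WTI series

# reconstruct one subseries per decomposition band
coeffs = pywt.wavedec(price, 'db4', level=3)
subseries = []
for k in range(len(coeffs)):
    kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, 'db4')[:len(price)])
features = np.column_stack(subseries)

X, y = features[:-1], price[1:]                    # today -> tomorrow
mlr = LinearRegression().fit(X, y)
print("in-sample R^2:", mlr.score(X, y))
```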

  11. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    NASA Astrophysics Data System (ADS)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to the BI-RADS score, and the mode of their evaluations was taken as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but that better training is required.
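
    The scoring rule lends itself to a short sketch: one PCA space per BI-RADS class, with a new histogram assigned to the class whose space reconstructs it with the smallest error. The histograms and component count below are placeholders.

```python
# Sketch: per-class PCA spaces over image histograms; a new histogram is
# assigned to the class with the lowest reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
# placeholder training histograms (20 per class, 256 gray-level bins)
train = {c: rng.normal(size=(20, 256)) + 3 * c for c in range(4)}

spaces = {c: PCA(n_components=5).fit(H) for c, H in train.items()}

def classify(hist):
    errs = {}
    for c, p in spaces.items():
        recon = p.inverse_transform(p.transform(hist[None, :]))
        errs[c] = np.linalg.norm(hist - recon[0])   # distance to class space
    return min(errs, key=errs.get)

print("predicted BI-RADS category:", classify(rng.normal(size=256)))
```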

  12. Predicting tissue conductivity influences on body surface potentials-an efficient approach based on principal component analysis.

    PubMed

    Weber, Frank M; Keller, David U J; Bauer, Stefan; Seemann, Gunnar; Lorenz, Cristian; Dossel, Olaf

    2011-02-01

    In this paper, we present an efficient method to estimate changes in forward-calculated body surface potential maps (BSPMs) caused by variations in tissue conductivities. For blood, skeletal muscle, lungs, and fat, the influence of conductivity variations was analyzed using principal component analysis (PCA). For each single tissue, we obtained the first PCA eigenvector from seven sample simulations with conductivities between ±75% of the default value. We showed that this eigenvector was sufficient to estimate the signal over the whole ±75% conductivity range. By aligning the origins of the different PCA coordinate systems and superimposing the single-tissue effects, it was possible to estimate the BSPM for combined conductivity variations in all four tissues. Furthermore, the method can be used to easily calculate confidence intervals for the signal, i.e., the minimal and maximal possible amplitudes for given conductivity uncertainties. In addition, it was possible to determine the most probable conductivity values for a given BSPM signal. This was achieved by probing hundreds of different conductivity combinations with a numerical optimization scheme. In conclusion, our method allows efficient prediction of forward-calculated BSPMs over a wide range of conductivity values from a few sample simulations.
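
    A minimal sketch of the single-tissue estimation step: seven sample simulations yield a first eigenvector, and a map at any intermediate conductivity is reconstructed as the mean map plus an interpolated score times that eigenvector. The simulated maps are placeholders, and the superposition across tissues is only indicated in the comments.

```python
# Sketch: estimate a BSPM at an unsampled conductivity from the first PCA
# eigenvector of seven sample simulations; single-tissue estimates would be
# superimposed for combined variations. Maps here are random placeholders.
import numpy as np

rng = np.random.default_rng(10)
conds = np.linspace(0.25, 1.75, 7)            # +/-75% around the default
n_leads = 300
base = rng.normal(size=n_leads)
maps = np.array([base * c + 0.01 * rng.normal(size=n_leads) for c in conds])

mean_map = maps.mean(axis=0)
centered = maps - mean_map
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
e1 = Vt[0]                                    # first PCA eigenvector
scores = centered @ e1                        # one score per simulation

def estimate(cond):
    s = np.interp(cond, conds, scores)        # score at an unsampled value
    return mean_map + s * e1
```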

  13. The Components of Microbiological Risk Analysis.

    PubMed

    Liuzzo, Gaetano; Bentley, Stefano; Giacometti, Federica; Serraino, Andrea

    2015-02-03

    The paper describes the process of risk analysis in a food safety perspective. The steps of risk analysis defined as a process consisting of three interconnected components (risk assessment, risk management, and risk communication) are analysed. The different components of the risk assessment, risk management and risk communication are further described.

  14. The Components of Microbiological Risk Analysis

    PubMed Central

    Bentley, Stefano; Giacometti, Federica; Serraino, Andrea

    2015-01-01

    The paper describes the process of risk analysis in a food safety perspective. The steps of risk analysis defined as a process consisting of three interconnected components (risk assessment, risk management, and risk communication) are analysed. The different components of the risk assessment, risk management and risk communication are further described. PMID:27800384

  15. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589

  16. Suppression of inter-device variation for component analysis of turbid liquids based on spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Shengzhao; Zhang, Linna; Li, Gang; Lin, Ling

    2017-03-01

    Diffuse reflectance spectroscopy is a useful tool for obtaining quantitative information in turbid media, which is typically achieved by developing a multivariate regression model that links the spectral signal to the component concentrations. However, in most cases, variations between the actual measurement and the modeling process of the device may cause errors in predicting a component's concentration. In this paper, we propose a data-processing method that resists these variations. The method performs a curve fit of the multiple-position diffuse reflectance spectral data. One of the parameters in the fitting function was found to be insensitive to inter-device variations yet sensitive to the component concentrations, so this fitted parameter was used in the modeling instead of the spectral signal directly. Experiments demonstrate the feasibility of the proposed method and its resistance to errors induced by inter-device variations.
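
    A sketch of the fitting step, assuming for illustration an exponential decay of reflectance with source-detector distance; the actual fitting function and which parameter proves device-insensitive are specific to the paper, so the model and numbers below are stand-ins.

```python
# Sketch: fit multi-position reflectance to an assumed decay model and use
# a fitted parameter, rather than the raw signal, as the modeling feature.
import numpy as np
from scipy.optimize import curve_fit

def decay(r, a, k):           # assumed fitting function R(r) = a * exp(-k r)
    return a * np.exp(-k * r)

r = np.array([1.0, 1.5, 2.0, 2.5, 3.0])          # detector distances, mm (toy)
R = np.array([0.82, 0.55, 0.37, 0.25, 0.17])     # measured reflectance (toy)

(a_fit, k_fit), _ = curve_fit(decay, r, R, p0=(1.0, 0.5))
print(f"concentration-sensitive parameter k = {k_fit:.3f} /mm")
```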

  17. Comparison of dimension reduction-based logistic regression models for case-control genome-wide association study: principal components analysis vs. partial least squares

    PubMed Central

    Yi, Honggang; Wo, Hongmei; Zhao, Yang; Zhang, Ruyang; Dai, Junchen; Jin, Guangfu; Ma, Hongxia; Wu, Tangchun; Hu, Zhibin; Lin, Dongxin; Shen, Hongbing; Chen, Feng

    2015-01-01

    With recent advances in biotechnology, genome-wide association studies (GWAS) have been widely used to identify genetic variants that underlie human complex diseases and traits. In case-control GWAS, the typical statistical strategy is traditional logistic regression (LR) based on single-locus analysis. However, such single-locus analysis leads to the well-known multiplicity problem, with a risk of inflating type I error and reducing power. Dimension reduction-based techniques, such as principal component-based logistic regression (PC-LR) and partial least squares-based logistic regression (PLS-LR), have recently gained much attention in the analysis of high-dimensional genomic data. However, the performance of these methods is still not clear, especially in GWAS. We conducted simulations and a real data application to compare the type I error and power of PC-LR, PLS-LR and LR applicable to GWAS within a defined single nucleotide polymorphism (SNP) set region. We found that PC-LR and PLS-LR can reasonably control type I error under the null hypothesis. In contrast, LR corrected by the Bonferroni method was more conservative in all simulation settings. In particular, we found that PC-LR and PLS-LR had comparable power and both outperformed LR, especially when the causal SNP was in high linkage disequilibrium with genotyped ones and had a small effect size in the simulations. Based on SNP set analysis, we applied all three methods to analyze non-small cell lung cancer GWAS data. PMID:26243516
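
    The contrast at the heart of the comparison can be sketched directly: one likelihood-ratio test for PC-LR on the SNP-set principal components versus per-SNP logistic regressions against a Bonferroni-adjusted threshold. The genotypes and phenotypes below are simulated, and the component count is an arbitrary choice.

```python
# Sketch: PC-LR (one global test on SNP-set PCs) versus Bonferroni-corrected
# single-locus logistic regression.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(11)
G = rng.integers(0, 3, size=(1000, 30)).astype(float)   # 1000 subjects, 30 SNPs
y = rng.integers(0, 2, 1000)                            # case/control labels

# PC-LR: logistic regression on the top PCs, tested with a likelihood ratio
pcs = PCA(n_components=5).fit_transform(G)
full = sm.Logit(y, sm.add_constant(pcs)).fit(disp=0)
null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0)
lrt = 2 * (full.llf - null.llf)
p_set = stats.chi2.sf(lrt, df=5)

# single-locus LR with Bonferroni correction (the conservative comparator)
p_each = [sm.Logit(y, sm.add_constant(G[:, [j]])).fit(disp=0).pvalues[1]
          for j in range(G.shape[1])]
hits = np.where(np.array(p_each) < 0.05 / G.shape[1])[0]
print(f"SNP-set p = {p_set:.3f}; Bonferroni-significant SNPs: {hits}")
```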

  18. Component analysis and target cell-based neuroactivity screening of Panax ginseng by ultra-performance liquid chromatography coupled with quadrupole-time-of-flight mass spectrometry.

    PubMed

    Yuan, Jinbin; Chen, Yang; Liang, Jian; Wang, Chong-Zhi; Liu, Xiaofei; Yan, Zhihong; Tang, Yi; Li, Jiankang; Yuan, Chun-Su

    2016-12-01

    Ginseng is one of the most widely used natural medicines in the world. Recent studies have suggested that Panax ginseng has a wide range of beneficial effects on aging, central nervous system disorders, and neurodegenerative diseases. However, knowledge about the specific bioactive components of ginseng is still limited. This work aimed to screen for the bioactive components in Panax ginseng that act against neurodegenerative diseases, using a target cell-based bioactivity screening method. First, component analysis of Panax ginseng extracts was performed by UPLC-QTOF-MS, and a total of 54 compounds in white ginseng were characterized and identified according to retention behaviors, accurate molecular weights, MS characteristics, parent nuclei, aglycones, side chains, and literature data. Then, a target cell-based bioactivity screening method using SH-SY5Y cells was developed to predict candidate compounds in ginseng. Four ginsenosides, Rg2, Rh1, Ro, and Rd, were observed to be active. The target cell-based bioactivity screening method coupled with the UPLC-QTOF-MS technique has suitable sensitivity and can be used as a screening tool for low-content bioactive constituents in natural products.

  19. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The ideas of cost decomposition are summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  20. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  1. Compressive-projection principal component analysis.

    PubMed

    Fowler, James E

    2009-10-01

    Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data.

  2. Finite Element Based Stress Analysis of Graphite Component in High Temperature Gas Cooled Reactor Core Using Linear and Nonlinear Irradiation Creep Models

    SciTech Connect

    Mohanty, Subhasish; Majumdar, Saurindranath

    2015-01-01

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  3. Accurate lithography hotspot detection based on principal component analysis-support vector machine classifier with hierarchical data clustering

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Gao, Jhih-Rong; Ding, Duo; Zeng, Xuan; Pan, David Z.

    2015-01-01

    As technology nodes continue to shrink, layout patterns become more sensitive to lithography processes, resulting in lithography hotspots that need to be identified and eliminated during physical verification. We propose an accurate hotspot detection approach based on a principal component analysis-support vector machine classifier. Several techniques, including hierarchical data clustering, data balancing, and multilevel training, are provided to enhance the performance of the proposed approach. Our approach is accurate, more efficient than conventional time-consuming lithography simulation, and provides high flexibility for adapting to new lithography processes and rules.

  4. Calculation of the elastic properties of prosthetic knee components with an iterative finite element-based modal analysis: quantitative comparison of different measuring techniques.

    PubMed

    Woiczinski, Matthias; Tollrian, Christopher; Schröder, Christian; Steinbrück, Arnd; Müller, Peter E; Jansson, Volkmar

    2013-08-01

    With the aging but still active population, research on total joint replacements relies increasingly on numerical methods, such as finite element analysis, to improve the wear resistance of components. However, the validity of finite element models largely depends on the accuracy of their material behavior and geometrical representation. In particular, material properties are often based on manufacturer data or literature reports, but can alternatively be estimated by matching experimental measurements and structural predictions through modal analyses and identification of eigenfrequencies. The aim of the present study was to compare the accuracy of common setups used for estimating the eigenfrequencies of typical components often used in prosthetic joints. Eigenfrequencies of cobalt-chrome and ultra-high-molecular-weight polyethylene components were therefore measured with four different setups and used in modal analyses of corresponding finite element models for an iterative adjustment of their material properties. Results show that for the lightly damped cobalt-chrome endoprosthesis components, all common measuring setups provided accurate measurements. In the case of highly damped structures, measurements were only possible with setups that include a continuous excitation system, such as an electrodynamic shaker. This study demonstrates that the iterative back-calculation of eigenfrequencies can be a reliable method for estimating the elastic properties of finite element models.
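
    As a pared-down analogue of the back-calculation, the sketch below recovers a Young's modulus from one measured eigenfrequency using the analytic Euler-Bernoulli cantilever formula in place of a finite element model; the geometry, density, and measured frequency are invented, and the iterative FE matching follows the same logic with a numerical model in the loop.

```python
# Toy back-calculation of an elastic modulus from a measured eigenfrequency,
# using the analytic cantilever formula instead of an FE modal analysis.
import numpy as np

L, b, h = 0.10, 0.02, 0.005        # beam length, width, thickness (m), toy
rho = 940.0                        # density, UHMWPE-like (kg/m^3), assumed
f1_measured = 75.0                 # first measured eigenfrequency (Hz), toy

A = b * h                          # cross-section area
I = b * h ** 3 / 12                # second moment of area
lam1 = 1.8751                      # first cantilever eigenvalue

# f1 = (lam1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))  ->  solve for E
E = (2 * np.pi * f1_measured * L ** 2 / lam1 ** 2) ** 2 * rho * A / I
print(f"estimated Young's modulus: {E / 1e9:.2f} GPa")
```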

  5. Components of Task-Based Needs Analysis of the ESP Learners with the Specialization of Business and Tourism

    ERIC Educational Resources Information Center

    Poghosyan, Naira

    2016-01-01

    In the following paper we shall thoroughly analyze the target learning needs of the learners within an ESP (English for Specific Purposes) context. The main concerns of ESP have always been and remain with the needs analysis, text analysis and preparing learners to communicate effectively in the tasks prescribed by their study or work situation.…

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  7. Independent component analysis for biomedical signals.

    PubMed

    James, Christopher J; Hesse, Christian W

    2005-02-01

    Independent component analysis (ICA) is increasing in popularity in the field of biomedical signal processing. It is generally used when it is required to separate measured multi-channel biomedical signals into their constituent underlying components. The use of ICA has been facilitated in part by the free availability of toolboxes that implement popular flavours of the technique. Fundamentally, ICA in biomedicine involves the extraction and separation of statistically independent sources underlying multiple measurements of biomedical signals. Technical advances in algorithmic developments implementing ICA are reviewed along with new directions in the field. These advances are summarized specifically with applications to biomedical signals in mind. The basic assumptions made when applying ICA are discussed, along with their implications particularly for biomedical signals. ICA as a specific embodiment of blind source separation (BSS) is also discussed, and as a consequence the criterion used for establishing independence between sources is reviewed; this leads to the introduction of ICA/BSS techniques based on time, frequency, and joint time-frequency decompositions of the data. Finally, advanced implementations of ICA are illustrated as applied to neurophysiologic signals in the form of electromagnetic brain signal data.

  8. Component fragilities. Data collection, analysis and interpretation

    SciTech Connect

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment, with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high-level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators, and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturer and model. For some devices, testing even at the shake table vibration limit does not produce any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high-level test data exists.

  9. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty based on 3D human face data. Many authors have proposed the principal component analysis (PCA) method, which permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, in this paper the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest-ranked contestants in the 2009 competition. PCA was conducted on this sample of real faces to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation, and this hypothesis is therefore not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness but are undoubtedly significantly influenced by it, the authors conclude, based on their experience and on real faces, that PCA is not a valid prediction tool for attractiveness. The database of features for the analyzed sample is downloadable online, and further contributions are welcome.

  10. PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Wang, Weichen

    2016-01-01

    This paper introduces a Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by an additive model via sieve approximations. Using the newly proposed Projected-PCA, rates of convergence for the smooth factor loading matrices are obtained that are much faster than those of conventional factor analysis. The convergence is achieved even when the sample size is finite, which is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests of whether observed covariates have explanatory power for the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374

  11. Pse-Analysis: a python package for DNA/RNA and protein/ peptide sequence analysis based on pseudo components and kernel methods.

    PubMed

    Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen

    2017-01-05

    To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross-validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis automatically constructs an ideal predictor and then yields the predicted results for the submitted query samples. All of these otherwise tedious jobs are done automatically by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.

  12. Energy component analysis of π interactions.

    PubMed

    Sherrill, C David

    2013-04-16

    Fundamental features of biomolecules, such as their structure, solvation, and crystal packing and even the docking of drugs, rely on noncovalent interactions. Theory can help elucidate the nature of these interactions, and energy component analysis reveals the contributions from the various intermolecular forces: electrostatics, London dispersion terms, induction (polarization), and short-range exchange-repulsion. Symmetry-adapted perturbation theory (SAPT) provides one method for this type of analysis. In this Account, we show several examples of how SAPT provides insight into the nature of noncovalent π-interactions. In cation-π interactions, the cation strongly polarizes electrons in π-orbitals, leading to substantially attractive induction terms. This polarization is so important that a cation and a benzene attract each other when placed in the same plane, even though a consideration of the electrostatic interactions alone would suggest otherwise. SAPT analysis can also support an understanding of substituent effects in π-π interactions. Trends in face-to-face sandwich benzene dimers cannot be understood solely in terms of electrostatic effects, especially for multiply substituted dimers, but SAPT analysis demonstrates the importance of London dispersion forces. Moreover, detailed SAPT studies also reveal the critical importance of charge penetration effects in π-stacking interactions. These effects arise in cases with substantial orbital overlap, such as in π-stacking in DNA or in crystal structures of π-conjugated materials. These charge penetration effects lead to attractive electrostatic terms where a simpler analysis based on atom-centered charges, electrostatic potential plots, or even distributed multipole analysis would incorrectly predict repulsive electrostatics. SAPT analysis of sandwich benzene, benzene-pyridine, and pyridine dimers indicates that dipole/induced-dipole terms present in benzene-pyridine but not in benzene dimer are relatively

  13. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.

  14. Independent component analysis for audio signal separation

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens; Gnann, Volker

    2005-10-01

    In this paper an audio separation algorithm is presented which is based on Independent Component Analysis (ICA). Audio separation could be the basis for many applications, for example in the field of telecommunications, quality enhancement of audio recordings, or audio classification tasks. Well-known ICA algorithms are currently not usable for real-world recordings because they are designed for signal mixtures with linear and time-invariant mixing matrices. To adapt a standard ICA algorithm to real-world two-channel auditory scenes with two audio sources, the input audio streams are segmented in the time domain and a constant mixing matrix is assumed within each segment. The next steps are a time-delay estimation for each audio source in the mixture and a determination of the number of existing sources. In the following processing steps, the input signals are time-shifted for each source and a standard ICA for linear mixtures is performed. After that, the remaining tasks are the evaluation of the ICA results and the construction of the resulting audio streams containing the separated sources.

  15. Principal Component Analysis of Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Cleary, Colleen; Rodriguez, David

    2017-01-01

    Principal component analysis is a technique for reducing variables and emphasizing patterns in a data set. In this study, the data set consisted of the attributes of 174 brown dwarfs. The PCA was performed on several photometric measurements in near-infrared wavelengths and colors in order to determine if these variables showed a correlation with the physical parameters. This research resulted in two separate models that predict luminosity and temperature. The application of principal component analysis on the near-infrared photometric measurements and colors of brown dwarfs, along with models, provides alternate methods for predicting the luminosity and temperature of brown dwarfs using only photometric measurements.

  16. Identifying fouling events in a membrane-based drinking water treatment process using principal component analysis of fluorescence excitation-emission matrices.

    PubMed

    Peiris, Ramila H; Hallé, Cynthia; Budman, Hector; Moresoli, Christine; Peldszus, Sigrid; Huck, Peter M; Legge, Raymond L

    2010-01-01

    The identification of key foulants and the provision of early warning of high fouling events for drinking water treatment membrane processes are crucial for the development of effective countermeasures to membrane fouling, such as pretreatment. Principal foulants include organic, colloidal, and particulate matter present in the membrane feed water. In this research, principal component analysis (PCA) of fluorescence excitation-emission matrices (EEMs) was identified as a viable tool for monitoring the performance of pretreatment stages (in this case biological filtration), as well as of ultrafiltration (UF) and nanofiltration (NF) membrane systems. In addition, fluorescence EEM-based principal component (PC) score plots, generated using the fluorescence EEMs obtained after just 1 hour of UF or NF operation, could be related to high fouling events likely caused by elevated levels of particulate/colloid-like material in the biofilter effluents. The fluorescence EEM-based PCA approach presented here is sensitive enough to be used at low organic carbon levels and has potential as an early detection method for identifying high fouling events, allowing appropriate operational countermeasures to be taken.
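
    As a concrete illustration of the score-plot idea, the following minimal sketch unfolds a stack of excitation-emission matrices into sample vectors and projects them onto two principal components. The arrays and grid sizes are hypothetical stand-ins for measured EEMs.

        # Minimal sketch: PC score coordinates from unfolded EEMs (synthetic data).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_samples, n_ex, n_em = 40, 25, 60          # hypothetical grid sizes
        eems = rng.random((n_samples, n_ex, n_em))  # stand-in for measured EEMs

        X = eems.reshape(n_samples, -1)             # unfold each EEM into a row vector
        X -= X.mean(axis=0)                         # mean-center across samples

        pca = PCA(n_components=2)
        scores = pca.fit_transform(X)               # coordinates for the PC score plot
        print(pca.explained_variance_ratio_)

    In such a monitoring setting, a new sample falling far from the cluster of normal-operation scores would flag a potential high fouling event.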

  17. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, and this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; delineate the structure of GAMESS computations; and describe our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To demonstrate the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  18. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  19. Stochastic convex sparse principal component analysis.

    PubMed

    Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu

    2016-12-01

    Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, traditional PCA has an obvious disadvantage when it is applied to analyze data where interpretability is important. In applications where the features have physical meanings, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of the principal components. Sparse PCA can be formulated as an ℓ1-regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. The stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance-reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
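
    To make the ℓ1/proximal machinery concrete, here is a minimal sketch of a soft-thresholded power iteration for one sparse leading component. It illustrates only the proximal (soft-thresholding) step applied to a sample covariance; it is not the paper's Cvx-SPCA algorithm, and the penalty value lam is an arbitrary illustration choice.

        # Sketch of an l1 proximal (soft-thresholding) step inside a power
        # iteration for one sparse leading component; this is a simple
        # thresholded power method, not the paper's Cvx-SPCA algorithm.
        import numpy as np

        def soft_threshold(v, lam):
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def sparse_pc(X, lam=0.5, n_iter=200):
            S = np.cov(X, rowvar=False)          # sample covariance matrix
            v = np.linalg.eigh(S)[1][:, -1]      # dense leading eigenvector as start
            for _ in range(n_iter):
                v = soft_threshold(S @ v, lam)   # power step + proximal operator
                nrm = np.linalg.norm(v)
                if nrm == 0.0:                   # penalty too strong: all zeros
                    break
                v /= nrm                         # renormalize to the unit sphere
            return v

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 20))
        X[:, :3] += 3.0 * rng.normal(size=(500, 1))  # shared factor -> sparse loading
        print(np.round(sparse_pc(X), 2))             # weight concentrates on 3 features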

  20. Principal component analysis of phenolic acid spectra

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  1. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  2. Selection of principal components based on Fisher discriminant ratio

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyan; Naghedolfeizi, Masoud; Arora, Sanjeev; Yousif, Nabil; Aberra, Dawit

    2016-05-01

    Principal component analysis transforms a set of possibly correlated variables into uncorrelated variables and is widely used as a technique for dimensionality reduction and feature extraction. In some applications of dimensionality reduction, the objective is to use a small number of principal components to represent most of the variation in the data. The main purpose of feature extraction, on the other hand, is to facilitate subsequent pattern recognition and machine learning tasks, such as classification. Selecting principal components for classification tasks aims at more than dimensionality reduction: the capability of distinguishing different classes is another major concern, and components with larger eigenvalues do not necessarily have better distinguishing capabilities. In this paper, we investigate a strategy of selecting principal components based on the Fisher discriminant ratio. The ratio of between-class variance to within-class variance is calculated for each component, and the principal components are selected on that basis. The main objective is to select the components with large Fisher discriminant ratios so that adequate class separability is obtained. To alleviate overfitting, which is common when few training data are available, the number of selected components is determined by the classification accuracy on validation data in a cross-validation procedure. The selection method is evaluated by face recognition experiments.
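
    A minimal sketch of the selection criterion described above: compute the Fisher discriminant ratio of each PC score and rank components by it rather than by eigenvalue. The synthetic two-class data and all parameter choices below are hypothetical.

        # Hedged sketch: rank PCs by Fisher discriminant ratio rather than by
        # eigenvalue; the two-class data and all settings are synthetic.
        import numpy as np
        from sklearn.decomposition import PCA

        def fisher_ratio(score, y):
            classes = np.unique(y)
            grand = score.mean()
            between = sum((y == c).sum() * (score[y == c].mean() - grand) ** 2
                          for c in classes)
            within = sum(((score[y == c] - score[y == c].mean()) ** 2).sum()
                         for c in classes)
            return between / within

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        y = rng.integers(0, 2, size=200)
        X[y == 1, 4] += 2.0                      # class separation in one feature

        scores = PCA(n_components=10).fit_transform(X)
        ratios = [fisher_ratio(scores[:, j], y) for j in range(scores.shape[1])]
        print(np.argsort(ratios)[::-1][:3])      # most discriminative components

    Note that the component capturing the class separation need not be the one with the largest eigenvalue, which is exactly the point of the ranking.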

  3. Medical diagnosis of atherosclerosis from Carotid Artery Doppler Signals using principal component analysis (PCA), k-NN based weighting pre-processing and Artificial Immune Recognition System (AIRS).

    PubMed

    Latifoğlu, Fatma; Polat, Kemal; Kara, Sadik; Güneş, Salih

    2008-02-01

    In this study, we propose a new medical diagnosis system based on principal component analysis (PCA), k-NN based weighting pre-processing, and the Artificial Immune Recognition System (AIRS) for the diagnosis of atherosclerosis from carotid artery Doppler signals. The suggested system consists of four stages. First, in the feature extraction stage, we obtained the features related to atherosclerosis using Fast Fourier Transform (FFT) modeling and calculation of the maximum frequency envelope of the sonograms. Second, in the dimensionality reduction stage, the 61 features of atherosclerosis were reduced to 4 features using PCA. Third, in the pre-processing stage, we weighted these 4 features using different values of k in a new weighting scheme based on k-NN based weighting pre-processing. Finally, in the classification stage, the AIRS classifier was used to classify subjects as healthy or as having atherosclerosis. A classification accuracy of 100% was obtained by the proposed system using 10-fold cross-validation. This success shows that the proposed system is a robust and effective system for the diagnosis of atherosclerosis.

  4. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  5. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  6. Modeling the Correlation of Composition-Processing-Property for TC11 Titanium Alloy Based on Principal Component Analysis and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Zeng, Weidong; Zhao, Yongqing; Shao, Yitao; Zhou, Yigang

    2012-11-01

    In the present investigation, the correlation of composition-processing-property for TC11 titanium alloy was established using principal component analysis (PCA) and an artificial neural network (ANN) based on experimental datasets obtained from forging experiments. During the PCA step, the feature vector is extracted by calculating the eigenvalues of the correlation coefficient matrix for the training dataset, and the dimension of the input variables is reduced from 11 to 6 features. Thus, PCA offers an efficient method to characterize the data with a high degree of dimensionality reduction. During the ANN step, the principal components were chosen as the input parameters and the mechanical properties as the output parameters, including the ultimate tensile strength (σ_b), yield strength (σ_0.2), elongation (δ), and reduction of area (φ). The training of the ANN model was conducted using the back-propagation learning algorithm. The results show close agreement between the values predicted by the PCA-ANN model and the experimental values, indicating that the established model is a powerful tool for constructing the correlation of composition-processing-property for TC11 titanium alloy. More importantly, the integrated PCA-ANN method can also be utilized for mechanical property prediction for other alloys.

  7. PCA: Principal Component Analysis for spectra modeling

    NASA Astrophysics Data System (ADS)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices about the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.

  8. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  9. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  10. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
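
    The reproducibility idea can be sketched in a few lines: run ICA several times with different initializations, align components across runs by maximum absolute correlation, and average the match quality. This is only a simplified illustration of the RAICAR principle (no selective averaging), using sklearn's FastICA on toy sources rather than fMRI data.

        # Simplified illustration of the RAICAR idea: repeat ICA with different
        # seeds, align components across runs by maximum absolute correlation,
        # and use the average match quality as a reproducibility score.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)
        S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]       # two toy sources
        X = S @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(2000, 6))

        runs = [FastICA(n_components=2, random_state=k).fit_transform(X)
                for k in range(5)]                             # repeated realizations

        ref = runs[0]
        for j in range(ref.shape[1]):
            matches = [np.abs(np.corrcoef(ref[:, j], r.T)[0, 1:]).max()
                       for r in runs[1:]]                      # best match per run
            print(f"component {j}: reproducibility = {np.mean(matches):.3f}")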

  11. Nonlinear principal component analysis of climate data

    NASA Astrophysics Data System (ADS)

    Monahan, Adam Hugh

    2000-11-01

    A nonlinear generalisation of Principal Component Analysis (PCA), denoted Nonlinear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of climate data. It is found empirically that NLPCA partitions variance in the same fashion as does PCA. An important distinction is drawn between a modal P-dimensional NLPCA analysis, in which the approximation is the sum of P nonlinear functions of one variable, and a nonmodal analysis, in which the P-dimensional NLPCA approximation is determined as a nonlinear, non-additive function of P variables. Nonlinear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor. The 1D and 2D NLPCA approximations explain 76% and 99.5% of the total variance, respectively, in contrast to 60% and 95% explained by the 1D and 2D PCA approximations. When applied to a data set consisting of monthly-averaged tropical Pacific Ocean sea surface temperatures (SST), the modal 1D NLPCA approximation describes average variability associated with the El Niño/Southern Oscillation (ENSO) phenomenon, as does the 1D PCA approximation. The NLPCA approximation, however, characterises the asymmetry in the spatial pattern of SST anomalies between average warm and cold events in a manner that the PCA approximation cannot. The second NLPCA mode of SST is found to characterise differences in ENSO variability between individual events, and in particular is consistent with the celebrated 1977 "regime shift". A 2D nonmodal NLPCA approximation is determined, the interpretation of which is complicated by the fact that a secondary feature extraction problem has to be carried out. It is found that this approximation contains much the same information as that provided by the modal analysis. A modal NLPC analysis of tropical Indo-Pacific sea level pressure (SLP) finds that the first mode describes average ENSO variability in this field, and also characterises an asymmetry in SLP fields between average warm and
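
    NLPCA of this kind is commonly implemented as an autoassociative (bottleneck) neural network. The sketch below, under the assumption that reconstruction variance explained is the comparison metric, contrasts a 1D PCA approximation with a 1D-bottleneck network on a curved toy data set; the architecture and training settings are illustrative only, not those of the study.

        # Hedged sketch: variance explained by a 1D PCA approximation vs a
        # 1D-bottleneck autoassociative network (a common NLPCA realization)
        # on a curved toy data set; architecture and settings are illustrative.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        theta = rng.uniform(0, np.pi, 1000)
        X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(1000, 2))

        def explained(Xhat):                     # fraction of total variance explained
            return 1 - ((X - Xhat) ** 2).sum() / ((X - X.mean(axis=0)) ** 2).sum()

        pca = PCA(n_components=1).fit(X)
        print("1D PCA:  ", explained(pca.inverse_transform(pca.transform(X))))

        ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                          max_iter=5000, random_state=0).fit(X, X)
        print("1D NLPCA:", explained(ae.predict(X)))

    On data lying along a curve, the one-dimensional nonlinear approximation can explain substantially more variance than the straight-line PCA approximation, mirroring the Lorenz-attractor comparison above.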

  12. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The focus here is on the time-independent failure response of these materials, and a reliability analysis associated with the initiation of matrix cracking is presented. A public-domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.

  13. Principal components analysis of Jupiter VIMS spectra

    USGS Publications Warehouse

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) instrument took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made up of seven coherent spectral classes. The map confirms previously published work on the Great Red Spot based on NIMS data. Some other new findings, presently under investigation, are presented.

  14. [Assessment of aquatic ecosystem health based on principal component analysis with entropy weight: a case study of Wanning Reservoir (Hainan Island, China)].

    PubMed

    Xie, Fei; Gu, Ji-Guang; Lin, Zhang-Wen

    2014-06-01

    A new assessment method for ecosystem health, based on principal component analysis (PCA) with entropy weighting, was applied to Wanning Reservoir, Hainan Island, China, to investigate whether the new method could resolve the overlap in weighting that exists in the traditional entropy weight-based method for ecosystem health. The results showed that the ecosystem health status of Wanning Reservoir improved overall from 2010 to 2012; the means of the ecosystem health comprehensive index (EHCI) were 0.534, 0.617, and 0.634 for 2010, 2011, and 2012, respectively, and the corresponding ecosystem health statuses were III (medium), II (good), and II (good). In addition, the ecosystem health status of the reservoir displayed weak seasonal variation. The variation of the EHCI became smaller in recent years, indicating that Wanning Reservoir tended to be relatively stable. Comparison of the index weights under the new and traditional methods indicated that the cumulative weight of the four correlated indices (i.e., DO, COD, BOD, and NH4+-N) was 0.382 under the traditional method, versus 0.178 under the new method, suggesting that applying PCA with entropy weighting can effectively avoid the overlap in weighting. In addition, correlation analysis between the trophic status index and the EHCI showed a significant negative correlation (P < 0.05), indicating that the new method based on PCA with entropy weighting improves not only the assignment of weights but also the accuracy of the results. The new method is suitable for evaluating the ecosystem health of reservoirs.
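
    The entropy-weight step of such an assessment can be sketched as follows: each water-quality index receives a weight that grows as its entropy across samples shrinks. This simplified illustration uses synthetic data and omits the positive/negative indicator normalization and the PCA stage of the actual method.

        # Simplified sketch of the entropy-weight step on synthetic data; the
        # actual method also normalizes positive/negative indicators and feeds
        # the weights into the PCA-based assessment.
        import numpy as np

        rng = np.random.default_rng(0)
        M = rng.random((12, 4))                  # 12 samples x 4 hypothetical indices

        P = M / M.sum(axis=0)                    # share of each sample per index
        n = M.shape[0]
        E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each index
        w = (1 - E) / (1 - E).sum()              # low entropy -> high weight
        print(np.round(w, 3))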

  15. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-de-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  16. Spectral Components Analysis of Diffuse Emission Processes

    SciTech Connect

    Malyshev, Dmitry; /KIPAC, Menlo Park

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with the energy spectra. Most existing methods use some information about the spatial distribution of components, e.g., closeness to an external template or independence of components, in order to separate them. In this paper we propose a method in which conditions are placed on the spectra only. The advantages of our method are: (1) it is internal: the maps of the components are constructed as combinations of data in different energy bins; (2) the components may be correlated with each other; (3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from the maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an appendix, we present a generalization of the method in which a number of external templates can also be added.

  17. Scaling in ANOVA-simultaneous component analysis.

    PubMed

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research, high-dimensional data is often collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identifying those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA has received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis, and hence become apparent in the solution. Therefore, the cornerstone for proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest, and that different types of scaling may be proper for the different effect matrices. We illustrate with a real-life example from nutritional research that different scaling approaches can greatly affect the ASCA interpretation. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, such as classification methods.

  18. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  19. WE-G-18C-09: Separating Perfusion and Diffusion Components From Diffusion Weighted MRI of Rectum Tumors Based On Intravoxel Incoherent Motion (IVIM) Analysis

    SciTech Connect

    Tyagi, N; Wengler, K; Mazaheri, Y; Hunt, M; Deasy, J; Gollub, M

    2014-06-15

    Purpose: Pseudodiffusion arises from the microcirculation of blood in the randomly oriented capillary network and contributes to the signal decay acquired using a multi-b-value diffusion-weighted (DW)-MRI sequence. This effect is more significant at low b-values and should be properly accounted for in apparent diffusion coefficient (ADC) calculations. The purpose of this study was to separate the perfusion and diffusion components based on a biexponential and a segmented monoexponential model using IVIM analysis. Methods: The signal attenuation is modeled as S(b) = S0[(1−f)exp(−bD) + f·exp(−bD*)]. Fitting the biexponential decay leads to the quantification of D, the true diffusion coefficient, D*, the pseudodiffusion coefficient, and f, the perfusion fraction. A nonlinear least-squares fit and two segmented monoexponential models were used to derive the values of D, D*, and f. In the segmented approach, b = 200 s/mm² was used as the cut-off value for the calculation of D. DW-MRIs of a rectum cancer patient, acquired before chemotherapy, before radiation therapy (RT), and 4 weeks into RT, were investigated as an example case. Results: The mean ADC for the tumor drawn on the DWI cases was 0.93, 1.00, and 1.13 × 10⁻³ mm²/s before chemotherapy, before RT, and 4 weeks into RT, respectively. The mean (D in 10⁻³ mm²/s, D* in 10⁻³ mm²/s, f in %) based on the biexponential fit was (0.67, 18.6, 27.2%), (0.72, 17.7, 28.9%), and (0.83, 15.1, 30.7%) at these time points. The mean (D, D*, f) based on the segmented fit was (0.72, 10.5, 12.1%), (0.72, 8.2, 17.4%), and (0.82, 8.1, 16.5%). Conclusion: ADC values are typically higher than true diffusion coefficients. For tumors with a significant perfusion effect, the ADC should be analyzed at higher b-values or separated from the perfusion component. The biexponential fit overestimates the perfusion fraction because of increased sensitivity to noise at low b-values.
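
    The two fitting strategies described above can be sketched with standard tools: a nonlinear least-squares fit of the full biexponential model and a segmented log-linear fit over b ≥ 200 s/mm² whose extrapolated intercept yields the perfusion fraction. The b-values and noise level below are illustrative, not the study's acquisition protocol.

        # Sketch of the two fits described above; b-values and noise are
        # illustrative, not the study's acquisition protocol.
        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, S0, f, D, Dstar):
            return S0 * ((1 - f) * np.exp(-b * D) + f * np.exp(-b * Dstar))

        b = np.array([0, 20, 50, 80, 100, 150, 200, 400, 600, 800], float)
        true = dict(S0=1.0, f=0.25, D=0.8e-3, Dstar=15e-3)
        S = ivim(b, **true) * (1 + 0.01 * np.random.default_rng(0).normal(size=b.size))

        # Full biexponential fit by nonlinear least squares
        popt, _ = curve_fit(ivim, b, S, p0=[1.0, 0.1, 1e-3, 10e-3], maxfev=10000)
        print("biexponential: f=%.3f D=%.2e D*=%.2e" % (popt[1], popt[2], popt[3]))

        # Segmented fit: log-linear over b >= 200 s/mm^2 gives D; the
        # extrapolated intercept relative to S(0) gives the perfusion fraction.
        hi = b >= 200
        slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
        D_seg = -slope
        f_seg = 1 - np.exp(intercept) / S[0]
        print("segmented:     f=%.3f D=%.2e" % (f_seg, D_seg))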

  20. Structural analysis methods development for turbine hot section components

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1989-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine Hot Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components made of isotropic and anisotropic nickel-base superalloys are presented.

  1. Principal component analysis-T1ρ voxel based relaxometry of the articular cartilage: a comparison of biochemical patterns in osteoarthritis and anterior cruciate ligament subjects

    PubMed Central

    Russell, Colin; Randolph, Allison; Li, Xiaojuan; Majumdar, Sharmila

    2016-01-01

    Background: Quantitative MR, including T1ρ mapping, has been extensively used to probe early biochemical changes in the knee articular cartilage of subjects with osteoarthritis (OA) and others at risk for cartilage degeneration, such as those with anterior cruciate ligament (ACL) injury and reconstruction. However, few studies have aimed to assess the spatial location and patterns of T1ρ. In this study we used a novel voxel-based relaxometry (VBR) technique coupled with principal component analysis (PCA) to extract relevant features, so as to describe regional patterns and to investigate their similarities and differences in T1ρ maps of subjects with OA and subjects six months after ACL reconstruction (ACLR). Methods: T1ρ quantitative MRI images were collected for 180 subjects from two separate cohorts. The OA cohort included 93 osteoarthritic patients and 25 age-matched controls. The ACLR-6M cohort included 52 patients with unilateral ACL tears who were imaged 6 months after ACL reconstruction, and 10 age-matched controls. Non-rigid registration onto a single template and local Z-score conversion were adopted for T1ρ spatial and intensity normalization of all the images in the dataset. PCA was used for data dimensionality reduction to obtain a description of all subjects in a 10-dimensional feature space. Logistic linear regression was used to identify distinctive features of OA and ACL subjects. Results: Global prolongation of the Z-score was observed in both OA and ACL subjects compared to controls [higher values in the 1st principal component (PC1); P=0.01]. In addition, relaxation time differences between the superficial and deep cartilage layers of the lateral tibia and trochlea were observed to be significant distinctive features between OA and ACL subjects. OA subjects demonstrated similar values between the two cartilage layers [higher value in the 2nd principal component (PC2); P=0.008], while ACL-reconstructed subjects showed T1ρ prolongation

  2. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

    This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in the computations of the real-time EEG system, a low-latency, high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse, and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the EEG raw data is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution in the 8-channel EEG system.

  3. Impact of parameter fluctuations on the performance of ethanol precipitation in production of Re Du Ning Injections, based on HPLC fingerprints and principal component analysis.

    PubMed

    Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei

    2016-01-01

    The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one at a time, and then examined by HPLC fingerprint analyses. Different from the traditional evaluation model based on a single constituent or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the parameters with the greatest influence on the recovery of active compounds in the precipitation resultants. The present study provides a reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and may help pharmaceutical enterprises adopt a scientific, reasonable, and cost-effective approach to ensuring batch-to-batch quality consistency of the final products.

  4. Practical Issues in Component Aging Analysis

    SciTech Connect

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair same-as-new, leading to a renewal process, and repair same-as-old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process. The common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
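
    One classical option for such a check, sketched below on hypothetical failure times, is the time-truncated Laplace trend statistic: values near zero are consistent with a trend-free renewal process, while large values point toward a nonhomogeneous Poisson process. The paper does not prescribe this particular test; it is shown here only to make the renewal-versus-NHPP judgment concrete.

        # Hedged sketch: time-truncated Laplace trend statistic on hypothetical
        # failure times; |U| > 1.96 suggests a trend (NHPP) at the 5% level,
        # while U near 0 is consistent with a renewal process.
        import numpy as np

        def laplace_u(failure_times, T):
            t = np.asarray(failure_times, float)
            n = t.size
            return (t.mean() - T / 2) / (T * np.sqrt(1 / (12 * n)))

        times = [200, 420, 560, 660, 740, 800, 850, 890]   # cluster late in (0, T)
        print(laplace_u(times, T=900))                     # ~2.07 -> deteriorating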

  5. Lattice Independent Component Analysis for Mobile Robot Localization

    NASA Astrophysics Data System (ADS)

    Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz

    This paper introduces an approach to appearance-based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be affine independent and are therefore candidates to be the endmembers of the data. The selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show in a sample path experiment that our approach can recognise the localization of the robot, and we compare the results with those of Independent Component Analysis (ICA).

  6. Multivariate analysis of the volatile components in tobacco based on infrared-assisted extraction coupled to headspace solid-phase microextraction and gas chromatography-mass spectrometry.

    PubMed

    Yang, Yanqin; Pan, Yuanjiang; Zhou, Guojun; Chu, Guohai; Jiang, Jian; Yuan, Kailong; Xia, Qian; Cheng, Changhe

    2016-11-01

    A novel method based on infrared-assisted extraction coupled to headspace solid-phase microextraction followed by gas chromatography with mass spectrometry has been developed for the rapid determination of the volatile components in tobacco. The optimal extraction conditions for maximizing the extraction efficiency were as follows: 65 μm polydimethylsiloxane-divinylbenzene fiber, extraction time of 20 min, infrared power of 175 W, and a distance of 2 cm between the infrared lamp and the headspace vial. Under the optimum conditions, 50 components were found in all ten tobacco samples from different geographical origins. Compared with conventional water-bath heating and non-heating extraction methods, the extraction efficiency of infrared-assisted extraction was greatly improved. Furthermore, multivariate analyses including principal component analysis, hierarchical cluster analysis, and similarity analysis were performed to evaluate the chemical information in these samples, dividing them into three flavor classes: rich, moderate, and fresh. These classification results were consistent with the sensory evaluation, which is pivotal and meaningful for tobacco discrimination. As a simple, fast, cost-effective, and highly efficient method, infrared-assisted extraction coupled to headspace solid-phase microextraction, combined with suitable chemometrics, is powerful and promising for distinguishing the geographical origins of tobacco samples.

  7. Compound fault diagnosis of gearboxes based on GFT component extraction

    NASA Astrophysics Data System (ADS)

    Ou, Lu; Yu, Dejie

    2016-11-01

    Compound fault diagnosis of gearboxes is of great importance to the long-term safe operation of rotating machines, and the key is to separate the different fault components. In this paper, the path graph is introduced into vibration signal analysis, and the graph Fourier transform (GFT) of vibration signals is investigated in the graph spectral domain. To better extract the fault components in gearboxes, a new adjacency weight matrix is defined, and the GFTs of simulated signals of a gear and a bearing with localized faults are analyzed. Further, since the GFT graph spectra of the gear fault component and the bearing fault component are mainly distributed in the low-order region and the high-order region, respectively, a novel method for the compound fault diagnosis of gearboxes based on GFT component extraction is proposed. In this method, the GFT of a gearbox vibration signal and the nonzero ratios, which are introduced as an auxiliary means of analyzing the eigenvectors, are first calculated. Then, the order thresholds for the reconstructed fault components are determined and the fault components are extracted. Finally, Hilbert demodulation analyses are conducted. From the envelope spectra of the fault components, the faults of the gear and the bearing can be diagnosed separately. The performance of the proposed method is validated with simulation data and experimental signals from a gearbox with compound faults.
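
    To illustrate the graph-spectral separation idea, the sketch below builds the Laplacian of a path graph over the signal samples, takes the GFT (projection onto the Laplacian eigenvectors), and reconstructs low-order and high-order parts separately. The toy signal and the fixed order threshold k stand in for the gearbox signal and the adaptive order thresholds of the proposed method.

        # Toy sketch of GFT-based separation on a path graph: Laplacian
        # eigenvectors form the GFT basis, and low-/high-order coefficients
        # give smooth and oscillatory parts; the threshold k is chosen by eye
        # here, standing in for the method's order thresholds.
        import numpy as np

        N = 512
        L = np.diag(np.r_[1.0, 2.0 * np.ones(N - 2), 1.0])   # path-graph Laplacian
        idx = np.arange(N - 1)
        L[idx, idx + 1] = -1.0
        L[idx + 1, idx] = -1.0

        lam, U = np.linalg.eigh(L)                           # eigenvalues ascending

        t = np.linspace(0, 1, N)
        x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

        xhat = U.T @ x                                       # GFT coefficients
        k = 40                                               # order threshold
        low = U[:, :k] @ xhat[:k]                            # smooth component
        high = U[:, k:] @ xhat[k:]                           # oscillatory component
        print(np.allclose(low + high, x))                    # exact decomposition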

  8. Multi-component separation and analysis of bat echolocation calls.

    PubMed

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
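
    As a sketch of the final step only (the fractional Fourier separation and empirical mode decomposition are not shown), the envelope and instantaneous frequency of a synthetic isolated FM component can be extracted from its analytic signal via the Hilbert transform:

        import numpy as np
        from scipy.signal import hilbert

        # Synthetic downward FM sweep standing in for an isolated component.
        fs = 250_000                          # sample rate (Hz)
        t = np.arange(0, 0.003, 1 / fs)       # 3 ms component
        comp = np.cos(2 * np.pi * (80_000 * t - 5e6 * t**2))

        # Analytic signal -> envelope and instantaneous frequency.
        analytic = hilbert(comp)
        envelope = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) / (2 * np.pi) * fs   # Hz, ~80 kHz -> 50 kHz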

  9. Component analysis of a school-based substance use prevention program in Spain: contributions of problem solving and social skills training content.

    PubMed

    Espada, José P; Griffin, Kenneth W; Pereira, Juan R; Orgilés, Mireia; García-Fernández, José M

    2012-02-01

    The objective of the present research was to examine the contribution of two intervention components, social skills training and problem solving training, to alcohol- and drug-related outcomes in a school-based substance use prevention program. Participants included 341 Spanish students aged 12 to 15 who received the prevention program Saluda in one of four experimental conditions: full program, social skills condition, problem solving condition, or wait-list control group. Students completed self-report surveys at the pretest, posttest and 12-month follow-up assessments. Compared to the wait-list control group, the three intervention conditions produced reductions in alcohol use and intentions to use other substances. The intervention effect size for alcohol use was greatest in magnitude for the full program with all components. Problem-solving skills measured at the follow-up were strongest in the condition that received the full program with all components. We discuss the implications of these findings, including the advantages and disadvantages of implementing interventions tailored to students by selecting intervention components after a skills-based needs assessment.

  10. Recursive approach of EEG-segment-based principal component analysis substantially reduces cryogenic pump artifacts in simultaneous EEG-fMRI data.

    PubMed

    Kim, Hyun-Chul; Yoo, Seung-Schik; Lee, Jong-Hwan

    2015-01-01

    Electroencephalography (EEG) data simultaneously acquired with functional magnetic resonance imaging (fMRI) data are preprocessed to remove gradient artifacts (GAs) and ballistocardiographic artifacts (BCAs). Nonetheless, these data, especially in the gamma frequency range, can be contaminated by residual artifacts produced by mechanical vibrations in the MRI system, in particular the cryogenic pump that compresses and transports the helium that chills the magnet (the helium pump). However, few options are available for the removal of helium-pump artifacts. In this study, we propose a recursive approach of EEG-segment-based principal component analysis (rsPCA) that enables the removal of these helium-pump artifacts. Using the rsPCA method, feature vectors representing helium-pump artifacts were successfully extracted as eigenvectors, and the reconstructed signals of the feature vectors were subsequently removed. A test using simultaneous EEG-fMRI data acquired from left-hand (LH) and right-hand (RH) clenching tasks performed by volunteers found that the proposed rsPCA method substantially reduced helium-pump artifacts in the EEG data and significantly enhanced task-related gamma band activity levels (p=0.0038 and 0.0363 for the LH and RH tasks, respectively) in EEG data from which GAs and BCAs had been removed. The spatial patterns of the fMRI data were estimated using a hemodynamic response function (HRF) modeled from the estimated gamma band activity in a general linear model (GLM) framework. Active voxel clusters were identified in the post-/pre-central gyri of the motor area, only with the rsPCA method (uncorrected p<0.001 for both LH/RH tasks). In addition, the superior temporal pole areas were consistently observed (uncorrected p<0.001 for the LH task and uncorrected p<0.05 for the RH task) in the spatial patterns of the HRF model for gamma band activity when the task paradigm and movement were also included in the GLM.
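
    A simplified, non-recursive sketch of the segment-based PCA idea (not the authors' exact rsPCA): each EEG segment is decomposed by SVD, and its highest-variance components, assumed here to capture the quasi-periodic pump artifact, are projected out.

        import numpy as np

        def segment_pca_clean(eeg, seg_len, n_remove=1):
            """Remove the highest-variance component(s) from each segment.

            eeg: channels x samples array (GAs/BCAs already removed).
            Each segment is demeaned and decomposed by SVD; the leading
            component(s), assumed to capture the pump artifact, are
            projected out. (Simplified, non-recursive illustration.)
            """
            cleaned = eeg.astype(float).copy()
            for start in range(0, eeg.shape[1] - seg_len + 1, seg_len):
                seg = cleaned[:, start:start + seg_len]
                seg = seg - seg.mean(axis=1, keepdims=True)
                U, s, Vt = np.linalg.svd(seg, full_matrices=False)
                artifact = (U[:, :n_remove] * s[:n_remove]) @ Vt[:n_remove]
                cleaned[:, start:start + seg_len] = seg - artifact
            return cleaned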

  11. Resting-State Functional Connectivity by Independent Component Analysis-Based Markers Corresponds to Areas of Initial Seizure Propagation Established by Prior Modalities from the Hypothalamus

    PubMed Central

    Wilfong, Angus A.; Curry, Daniel J.

    2016-01-01

    The aims of this study were to evaluate a clinically practical functional connectivity (fc) protocol designed to blindly identify the corresponding areas of initial seizure propagation and also to differentiate these areas from remote secondary areas affected by seizure. The patients in this cohort had intractable epilepsy caused by intrahypothalamic hamartoma, which is the location of the ictal focus. The ictal propagation pathway is homogeneous and established, thus creating the optimum situation for the proposed method validation study. Twelve patients with seizures from hypothalamic hamartoma and six normal control patients underwent resting-state functional MRI, using independent component analysis (ICA) to identify network differences in patients. This was followed by seed-based connectivity measures to determine the extent of fc derangement between hypothalamus and these areas. The areas with significant change in connectivity were compared with the results of prior studies' modalities used to evaluate seizure propagation. The left amygdala-parahippocampal gyrus area, cingulate gyrus, and occipitotemporal gyrus demonstrated the highest derangement in connectivity with the hypothalamus, p < 0.01, corresponding to the initial seizure propagation areas established by prior modalities. Areas of secondary ictal propagation were differentiated from these initial locations by first being identified as an abnormal neuronal signal source through ICA, but did not show significant connectivity directly with the known ictal focus. Noninvasive connectivity measures correspond to areas of initial ictal propagation and differentiate such areas from secondary ictal propagation, which may aid in ictal focus surgical disconnection planning and support the use of this newer modality for adjunctive information in epilepsy surgery evaluation. PMID:27503346

  12. To explain the variation of OGTT dynamics by biological mechanisms: a novel approach based on principal components analysis in women with history of GDM.

    PubMed

    Göbl, Christian S; Bozkurt, Latife; Mittlböck, Martina; Leutner, Michael; Yarragudi, Rajashri; Tura, Andrea; Pacini, Giovanni; Kautzky-Willer, Alexandra

    2015-07-01

    Early reexamination of carbohydrate metabolism via an oral glucose tolerance test (OGTT) is recommended after pregnancy with gestational diabetes (GDM). In this report, we aimed to assess the dominant patterns of dynamic OGTT measurements and subsequently explain them by means of the underlying pathophysiological processes. Principal components analysis (PCA), a statistical procedure that aims to reduce the dimensionality of multiple interrelated measures to a set of linearly uncorrelated variables (the principal components), was performed on OGTT data of glucose, insulin and C-peptide, in addition to age and body mass index (BMI), of 151 women (n = 110 females after GDM and n = 41 controls) at 3-6 mo after delivery. These components were explained by frequently sampled intravenous glucose tolerance test (FSIGT) parameters. Moreover, their relation to the later development of overt diabetes was studied. Three principal components (PCs) were identified, which explained 71.5% of the variation of the original 17 variables. PC1 (explaining 47.1%) was closely related to postprandial OGTT levels and FSIGT-derived insulin sensitivity (r = 0.68), indicating that it mirrors insulin sensitivity in the skeletal muscle. PC2 (explaining 17.3%) and PC3 (explaining 7.1%) were shown to be associated with β-cell failure and fasting (i.e., hepatic) insulin resistance, respectively. All three components were related to diabetes progression (which occurred in n = 25 females after GDM) and showed significant changes in long-term trajectories. A large proportion of the postpartum OGTT data is explained by principal components representing pathophysiological mechanisms on the pathway of impaired carbohydrate metabolism. Our results improve our understanding of the underlying biological processes and help provide an accurate postgestational risk stratification.

  13. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.

  14. Discriminant Multitaper Component Analysis of EEG

    NASA Astrophysics Data System (ADS)

    Dyrholm, Mads; Sajda, Paul

    2011-06-01

    This work extends Bilinear Discriminant Component Analysis to the case of oscillatory activity with allowed phase-variability across trials. The proposed method learns a spatial profile together with a multitaper basis which can integrate oscillatory power in a band-limited fashion. We demonstrate the method for predicting the handedness of a subject's button press given multivariate EEG data. We show that our method learns multitapers sensitive to oscillatory activity in the 8-12 Hz range with spatial filters selective for lateralized motor cortex. This finding is consistent with the well-known mu-rhythm, whose power is known to modulate as a function of which hand a subject plans to move, and thus is expected to be discriminative (predictive) of the subject's response.

  15. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
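
    A minimal sketch of the fixed-basis variant, using a hypothetical analytic family of thermal decay curves (the actual model family in the paper is not specified here): the eigenvectors are computed once from the model library, and the measured data are merely projected onto them.

        import numpy as np

        rng = np.random.default_rng(2)
        n_frames = 200
        t = np.linspace(0.01, 2.0, n_frames)

        # Hypothetical analytic family of thermal responses (e.g., decay
        # curves over a range of defect depths) used to build the basis.
        depths = np.linspace(0.5, 2.0, 16)
        library = np.array([t**-0.5 * (1 + 2 * np.exp(-d / t)) for d in depths])
        library = library - library.mean(axis=0)

        # Fixed temporal eigenvectors, computed once from the model library,
        # so the data (and any large defects in it) never drive the basis.
        _, _, Vt = np.linalg.svd(library, full_matrices=False)
        basis = Vt[:2]                             # leading eigenvectors

        # Processing the measured data is now a cheap projection.
        data = rng.random((n_frames, 64 * 64))     # frames x pixels stand-in
        component_images = basis @ (data - data.mean(axis=0))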

  16. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle

  17. Differential-Private Data Publishing Through Component Analysis

    PubMed Central

    Jiang, Xiaoqian; Ji, Zhanglong; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Ohno-Machado, Lucila

    2013-01-01

    A reasonable compromise of privacy and utility exists at an “appropriate” resolution of the data. We proposed novel mechanisms to achieve privacy preserving data publishing (PPDP) satisfying ε-differential privacy with improved utility through component analysis. The mechanisms studied in this article are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The differential PCA-based PPDP serves as a general-purpose data dissemination tool that guarantees better utility (i.e., smaller error) compared to Laplacian and Exponential mechanisms using the same “privacy budget”. Our second mechanism, the differential LDA-based PPDP, favors data dissemination for classification purposes. Both mechanisms were compared with state-of-the-art methods to show performance differences. PMID:24409205
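
    One common input-perturbation construction for differentially private PCA is sketched below: Laplace noise is added to the scatter matrix before eigendecomposition. This is an illustrative mechanism with a deliberately loose sensitivity calibration, not necessarily the mechanism proposed in this article.

        import numpy as np

        def dp_pca(X, epsilon, n_components, rng):
            """Sketch: Laplace noise on the scatter matrix, then
            eigendecomposition. Rows of X are assumed rescaled to unit
            L2 norm, giving a loose L1 sensitivity of about 2d for X^T X."""
            n, d = X.shape
            scatter = X.T @ X / n
            noise = rng.laplace(scale=2.0 * d / (n * epsilon), size=(d, d))
            noise = np.triu(noise) + np.triu(noise, 1).T   # keep symmetric
            _, vecs = np.linalg.eigh(scatter + noise)
            return vecs[:, ::-1][:, :n_components]         # top components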

  18. Evaluation of ground water monitoring network by principal component analysis.

    PubMed

    Gangopadhyay, S; Gupta, A; Nachabe, M H

    2001-01-01

    Principal component analysis is a data reduction technique used to identify the important components or factors that explain most of the variance of a system. This technique was extended to evaluating a ground water monitoring network, where the variables are monitoring wells. The objective was to identify monitoring wells that are important in predicting the dynamic variation in potentiometric head at a location. The technique is demonstrated through an application to the monitoring network of the Bangkok area. Principal component analysis was carried out for all the monitoring wells of the aquifer, and a ranking scheme based on the frequency of occurrence of a particular well as a principal well was developed. The decision maker with budget constraints can now opt to monitor principal wells, which can adequately capture the potentiometric head variation in the aquifer. This was evaluated by comparing the observed potentiometric head distribution using data from all available wells and from wells selected using the ranking scheme as a guideline.

  19. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
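
    A sketch of the subspace trick described above: the p-dimensional data are reduced once by SVD, bootstrap resampling and PCA are carried out on the n-dimensional coordinates, and components are mapped back through the fixed basis only when needed. Variable names are illustrative, not the authors' code.

        import numpy as np

        def fast_bootstrap_pca(X, n_boot, n_components, rng):
            """X: p x n (p measurements, n subjects), columns centered.

            Every bootstrap sample lies in the column space of X, so the
            SVD is computed once and resampling happens on the n x n
            coordinate matrix S = diag(D) Vt; the p-dimensional bootstrap
            PCs are U @ Ub and need never be stored explicitly."""
            U, D, Vt = np.linalg.svd(X, full_matrices=False)
            S = D[:, None] * Vt                       # n x n coordinates
            n = X.shape[1]
            boot_coords = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)      # resample subjects
                Sb = S[:, idx]
                Sb = Sb - Sb.mean(axis=1, keepdims=True)
                Ub, _, _ = np.linalg.svd(Sb, full_matrices=False)
                boot_coords.append(Ub[:, :n_components])
            return U, boot_coords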

  20. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach.

  1. Incorporating principal component analysis into air quality ...

    EPA Pesticide Factsheets

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variations (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric – the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO4(2-)) and ammonium (NH4(+)) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station–grid cell pairs) that are used as indicators for a more thorough diagnostic evaluation thereby hastening and facilitating understanding of the prob

  2. Nonlinear independent component analysis and multivariate time series analysis

    NASA Astrophysics Data System (ADS)

    Storck, Jan; Deco, Gustavo

    1997-02-01

    We derive an information-theory-based unsupervised learning paradigm for nonlinear independent component analysis (NICA) with neural networks. We demonstrate that, under the constraint of bounded and invertible output transfer functions, the two main goals of unsupervised learning, redundancy reduction and maximization of the transmitted information between input and output (the Infomax principle), are equivalent. No assumptions are made concerning the kind of input and output distributions, i.e., the kind of nonlinearity of correlations. An adapted version of the general NICA network is used for the modeling of multivariate time series by unsupervised learning. Given time series of various observables of a dynamical system, our net learns their evolution in time by extracting statistical dependencies between past and present elements of the time series. Multivariate modeling is obtained by making the present value of each time series statistically independent not only of its own past but also of the past of the other series. Therefore, in contrast to univariate methods, the information lying in the couplings between the observables is also used, and a detection of higher-order cross-correlations is possible. We apply our method to time series of the two-dimensional Hénon map and to experimental time series obtained from measurements of axial velocities at different locations in weakly turbulent Taylor-Couette flow.

  3. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear accelerator-based CT system. The performance of CT inspection on identically configured wrought and AM components with programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.

  4. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  5. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate spectral maxima stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
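
    For reference, a basic STFT-based Spectral Kurtosis estimator is sketched below (the plain definition, not the Optimized variant proposed in the paper): harmonic components drive SK toward -1, while stationary Gaussian structural responses give values near 0, which is what makes SK usable as a harmonic-component detector.

        import numpy as np
        from scipy.signal import stft

        def spectral_kurtosis(x, fs, nperseg=256):
            """SK(f) = <|X(t,f)|^4> / <|X(t,f)|^2>^2 - 2, from an STFT.

            Stationary Gaussian responses give SK ~ 0; a pure harmonic
            (constant-amplitude sinusoid) drives SK toward -1, flagging
            spectral maxima that are not structural modes."""
            f, _, Z = stft(x, fs=fs, nperseg=nperseg)
            mag2 = np.abs(Z) ** 2
            sk = (mag2 ** 2).mean(axis=1) / mag2.mean(axis=1) ** 2 - 2.0
            return f, sk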

  6. Features of spatiotemporal groundwater head variation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Hsiao, Chin-Tsai; Chang, Liang-Cheng; Tsai, Jui-Pin; Chen, You-Cheng

    2017-04-01

    The effect of external stimuli on a groundwater system can be understood by examining the features of spatiotemporal head variations. However, the head variations caused by various external stimuli are mixed signals. To identify the stimuli features of head variations, we propose a systematic approach based on independent component analysis (ICA), frequency analysis, cross-correlation analysis, a well-selection strategy, and hourly average head analysis. We also removed the head variations caused by regional stimuli (e.g., rainfall and river stage) from the original head variations of all the wells to better characterize the local stimuli features (e.g., pumping and tide). In the synthetic case study, the derived independent component (IC) features are more consistent with the features of the given recharge and pumping than the features derived from principal component analysis. In a real case study, the ICs associated with regional stimuli highly correlated with field observations, and the effect of regional stimuli on the head variation of all the wells was quantified. In addition, the tide, agricultural, industrial, and spring pumping features were characterized. Therefore, the developed method can facilitate understanding of the features of spatiotemporal head variation and quantification of the effects of external stimuli on a groundwater system.

  7. Network component analysis: reconstruction of regulatory signals in biological systems.

    PubMed

    Liao, James C; Boscolo, Riccardo; Yang, Young-Lyeol; Tran, Linh My; Sabatti, Chiara; Roychowdhury, Vwani P

    2003-12-23

    High-dimensional data sets generated by high-throughput technologies, such as DNA microarray, are often the outputs of complex networked systems driven by hidden regulatory signals. Traditional statistical methods for computing low-dimensional or hidden representations of these data sets, such as principal component analysis and independent component analysis, ignore the underlying network structures and provide decompositions based purely on a priori statistical constraints on the computed component signals. The resulting decomposition thus provides a phenomenological model for the observed data and does not necessarily contain physically or biologically meaningful signals. Here, we develop a method, called network component analysis, for uncovering hidden regulatory signals from outputs of networked systems, when only a partial knowledge of the underlying network topology is available. The a priori network structure information is first tested for compliance with a set of identifiability criteria. For networks that satisfy the criteria, the signals from the regulatory nodes and their strengths of influence on each output node can be faithfully reconstructed. This method is first validated experimentally by using the absorbance spectra of a network of various hemoglobin species. The method is then applied to microarray data generated from the yeast Saccharomyces cerevisiae, and the activities of various transcription factors during the cell cycle are reconstructed by using recently discovered connectivity information for the underlying transcriptional regulatory networks.

  8. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, and the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain sensitivity similar to the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally improve detection sensitivity in experimental optoacoustic measurements.
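
    The semi-blind step can be sketched generically as a per-pixel nonnegative least-squares fit against the a priori spectra. This illustrates constrained unmixing with known spectral profiles on synthetic data, not the constrained VCA algorithm itself.

        import numpy as np
        from scipy.optimize import nnls

        # Synthetic stand-ins: S holds the a priori absorption spectra
        # (wavelengths x chromophores), Y the measured multispectral stack.
        rng = np.random.default_rng(3)
        S = rng.random((16, 3))
        A_true = rng.random((3, 1000))
        Y = S @ A_true + 0.01 * rng.standard_normal((16, 1000))

        # Per-pixel nonnegative least squares against the known spectra.
        A = np.column_stack([nnls(S, Y[:, j])[0] for j in range(Y.shape[1])])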

  9. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.

  10. CHEMICAL ANALYSIS METHODS FOR ATMOSPHERIC AEROSOL COMPONENTS

    EPA Science Inventory

    This chapter surveys the analytical techniques used to determine the concentrations of aerosol mass and its chemical components. The techniques surveyed include mass, major ions (sulfate, nitrate, ammonium), organic carbon, elemental carbon, and trace elements. As reported in...

  11. Analysis of failed nuclear plant components

    SciTech Connect

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  12. Independent Component Analysis of Nanomechanical Responses of Cantilever Arrays

    SciTech Connect

    Archibald, Richard K; Datskos, Panos G; Noid, Don W; Lavrik, Nickolay V

    2007-01-01

    The ability to detect and identify chemical and biological elements in air or liquid environments is of far reaching importance. Performing this task using technology that minimally impacts the perceived environment is the ultimate goal. The development of functionalized cantilever arrays with nanomechanical sensing is an important step towards this ambition. This report couples the feature extraction abilities of Independent Component Analysis (ICA) and the classification techniques of neural networks to analyze the signals produced by microcantilever-array-based nanomechanical sensors. The unique capabilities of this analysis unleash the potential of this sensing technology to accurately determine the identities and concentrations of the components of chemical mixtures. Furthermore, it is demonstrated that the knowledge of how the sensor array reacts to individual analytes in isolation is sufficient information to decode mixtures of analytes - a substantial benefit, significantly increasing the analytical utility of these sensing devices.

  13. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    NASA Astrophysics Data System (ADS)

    de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru

    2005-10-01

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  14. Microcalorimeter pulse analysis by means of principal component decomposition

    NASA Astrophysics Data System (ADS)

    de Vries, C. P.; Schouten, R. M.; van der Kuur, J.; Gottardi, L.; Akamatsu, H.

    2016-07-01

    The X-ray integral field unit for the Athena mission consists of a microcalorimeter transition edge sensor pixel array. Incoming photons generate pulses which are analyzed in terms of energy, in order to assemble the X-ray spectrum. Usually this is done by means of optimal filtering in either the time or frequency domain. In this paper we investigate an alternative method based on principal component analysis. This method attempts to find the main components of an orthogonal set of functions that describe the data. We show, based on simulations, the influence of various instrumental effects on this type of analysis. We compare analyses in both the time and frequency domains. Finally, we apply these analyses to real data obtained via frequency-domain multiplexing readout.

  15. Application of independent component analysis for beam diagnosis

    SciTech Connect

    Huang, X.; Lee, S.Y.; Prebys, Eric; Tomlin, Ray; /Fermilab

    2005-05-01

    The independent component analysis (ICA) is applied to analyze simultaneous multiple turn-by-turn beam position monitor (BPM) data of synchrotrons. The sampled data are decomposed into physically independent source signals, such as betatron motion, synchrotron motion and other perturbation sources. The decomposition is based on simultaneous diagonalization of several unequal-time covariance matrices, unlike the model independent analysis (MIA), which uses the equal-time covariance matrix only. Consequently, the new method has an advantage over MIA in isolating the independent modes and is more robust under the influence of contaminating signals from bad BPMs. The spatial pattern and temporal pattern of each resulting component (mode) can be used to identify and analyze the associated physical cause. Beam optics can be studied on the basis of the betatron modes. The method has been successfully applied to the Booster Synchrotron at Fermilab.

  16. Improved dependent component analysis for hyperspectral unmixing with spatial correlations

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Wan, Jianwei; Huang, Bingchao; Lan, Tian

    2014-11-01

    In highly mixed hyperspectral datasets, dependent component analysis (DECA) has shown its superiority over traditional geometry-based algorithms. This paper proposes a new algorithm that combines DECA with the infinite hidden Markov random field (iHMRF) model, which can efficiently exploit spatial dependencies between image pixels and automatically determine the number of classes. An expectation-maximization algorithm is derived to infer the model parameters, including the endmembers, the abundances, the Dirichlet distribution parameters of each class, and the classification map. Experimental results based on synthetic and real hyperspectral data show the effectiveness of the proposed algorithm.

  17. Autonomous radar pulse modulation classification using modulation components analysis

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Qiu, Zhaoyang; Zhu, Jun; Tang, Bin

    2016-12-01

    An autonomous method for recognizing radar pulse modulations based on modulation components analysis is introduced in this paper. Unlike conventional automatic modulation classification methods, which extract modulation features based on a list of known patterns, the proposed method classifies modulations by the existence of basic modulation components, including continuous frequency modulations, discrete frequency codes and discrete phase codes, in an autonomous way. A feasible way to realize this method is to use the features of abrupt changes in the instantaneous frequency rate curve, which is derived from the short-term general representation of the phase derivative. This method is suitable not only for basic radar modulations but also for complicated and hybrid modulations. The theoretical result and two experiments demonstrate the effectiveness of the proposed method.

  18. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component-based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open-source, object-oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  19. Principal component analysis on chemical abundances spaces

    NASA Astrophysics Data System (ADS)

    Ting, Yuan-Sen; Freeman, Kenneth C.; Kobayashi, Chiaki; De Silva, Gayandhi M.; Bland-Hawthorn, Joss

    2012-04-01

    In preparation for the High Efficiency and Resolution Multi-Element Spectrograph (HERMES) chemical tagging survey of about a million Galactic FGK stars, we estimate the number of independent dimensions of the space defined by the stellar chemical element abundances [X/Fe]. This leads to a way to study the origin of elements from observed chemical abundances using principal component analysis. We explore abundances in several environments, including solar neighbourhood thin/thick disc stars, halo metal-poor stars, globular clusters, open clusters, the Large Magellanic Cloud and the Fornax dwarf spheroidal galaxy. By studying solar-neighbourhood stars, we confirm the universality of the r-process that tends to produce [neutron-capture elements/Fe] in a constant ratio. We find that, especially at low metallicity, the production of r-process elements is likely to be associated with the production of α-elements. This may support core-collapse supernovae as the r-process site. We also verify the overabundance of light s-process elements at low metallicity, and find that their relative contribution decreases at higher metallicity, which suggests that this lighter-element primary process may be associated with massive stars. We also verify the contribution from the s-process in low-mass asymptotic giant branch (AGB) stars at high metallicity. Our analysis reveals two types of core-collapse supernovae: one produces mainly α-elements, the other produces both α-elements and Fe-peak elements with a large enhancement of heavy Fe-peak elements, which may be the contribution from hypernovae. Excluding light elements that may be subject to internal mixing, K and Cu, we find that the [X/Fe] chemical abundance space in the solar neighbourhood has about six independent dimensions both at low metallicity (-3.5 ≲ [Fe/H] ≲ -2) and high metallicity ([Fe/H] ≳ -1). However, the dimensions come from very different origins in these two cases. The extra contribution from low-mass AGB

  20. Distributed Principal Component Analysis for Wireless Sensor Networks

    PubMed Central

    Le Borgne, Yann-Aël; Raybaud, Sylvain; Bontempi, Gianluca

    2008-01-01

    The Principal Component Analysis (PCA) is a data dimensionality reduction technique well-suited for processing data from sensor networks. It can be applied to tasks like compression, event detection, and event recognition. This technique is based on a linear transform where the sensor measurements are projected on a set of principal components. When sensor measurements are correlated, a small set of principal components can explain most of the measurement variability. This makes it possible to significantly decrease the amount of radio communication and energy consumption. In this paper, we show that the power iteration method can be distributed in a sensor network in order to compute an approximation of the principal components. The proposed implementation relies on an aggregation service, which has recently been shown to provide a suitable framework for distributing the computation of a linear transform within a sensor network. We also extend this previous work by providing a detailed analysis of the computational, memory, and communication costs involved. A compression experiment involving real data validates the algorithm and illustrates the tradeoffs between accuracy and communication costs. PMID:27873788
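
    A centralized sketch of the underlying numerical step, power iteration for the leading principal component; in the paper this iteration is distributed across sensor nodes via an aggregation service, which is not reproduced here.

        import numpy as np

        def leading_pc(C, n_iter=100, tol=1e-9, seed=0):
            """Power iteration for the first principal component of a
            covariance matrix C (the step distributed in the paper)."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(C.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(n_iter):
                w_new = C @ w
                w_new /= np.linalg.norm(w_new)
                if np.linalg.norm(w_new - w) < tol:
                    return w_new
                w = w_new
            return w

        # Usage: sensor readings (samples x sensors) -> covariance -> PC1.
        X = np.random.default_rng(1).random((500, 20))
        pc1 = leading_pc(np.cov(X, rowvar=False))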

  1. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
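
    A compact sketch of RPCA by principal component pursuit using an inexact augmented-Lagrangian iteration (a standard formulation with default parameters chosen for illustration; the paper's preprocessing and prescreener are not included):

        import numpy as np

        def rpca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
            """Principal component pursuit: M ~ L (low rank) + S (sparse).

            Singular value thresholding updates L, soft thresholding
            updates S; for GPR, anomalies are expected to stand out in S.
            """
            m, n = M.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))
            if mu is None:
                mu = m * n / (4.0 * np.abs(M).sum())
            S = np.zeros_like(M)
            Y = np.zeros_like(M)
            norm_M = np.linalg.norm(M, "fro")
            for _ in range(n_iter):
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                R = M - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
                Y = Y + mu * (M - L - S)
                if np.linalg.norm(M - L - S, "fro") < tol * norm_M:
                    break
            return L, S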

  2. Agile Objects: Component-Based Inherent Survivability

    DTIC Science & Technology

    2003-12-01

    We design, implement, and develop a component middleware system which enables online application reconfiguration to enhance application...Objects project has accomplished a basic proof of concept of the key project ideas showing working systems that embody location independence and online ...migration, open real-time structures and pre-allocation of resources to enable rapid migration, online interface mutation for elusive interfaces, and

  3. Bioactivity of beaver castoreum constituents using principal components analysis.

    PubMed

    Schulte, B A; Müller-Schwarze, D; Tang, R; Webster, F X

    1995-07-01

    North American beaver (Castor canadensis) were observed to sniff from the water and make land visits to some synthetic chemical components of castoreum placed on experimental scent mounds (ESM). In previous analysis, the elicitation (presence/absence), completeness, and/or strength (number, duration) of these key responses served as separate measures of biological activity. In this paper, we used principal components analysis (PCA) to combine linearly six related measures of observed response and one index of overnight visitation calculated over all trials. The first principal component accounted for a majority of the variation and allowed ranking of the samples based on their composite bioactivity. A second PCA, based only on response trials (excluding trials with no responses), showed that responses to the synthetic samples, once elicited, did not vary greatly in completeness or strength. None of the samples evoked responses as complete or strong as the castoreum control. Castoreum also elicited more multiple land visits (repeated visits to the ESM by the same individual or by more than one family member) than the synthetic samples, indicating that an understanding of the castoreum chemosignal requires consideration of responses by the family unit, and not just the land visit by the initial responder.

  4. Robust principal component analysis in water quality index development

    NASA Astrophysics Data System (ADS)

    Ali, Zalina Mohd; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan

    2014-06-01

    Some statistical procedures already available in the literature are employed in developing the water quality index, WQI. The complexity and interdependency that occur in the physical and chemical processes of water could be explained more easily if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing the WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized. Outliers may be included in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to that of the existing DOE-WQI.

  5. Quantitative analysis of planetary reflectance spectra with principal components analysis

    NASA Technical Reports Server (NTRS)

    Johnson, P. E.; Smith, M. O.; Adams, J. B.

    1985-01-01

    A technique is presented for quantitative analysis of planetary reflectance spectra as mixtures of particles on microscopic and macroscopic scales using principal components analysis. This technique allows for determination of the endmembers being mixed, their abundance, and the scale of mixing, as well as other physical parameters. Eighteen lunar telescopic reflectance spectra of the Copernicus crater region, from 600 nm to 1800 nm in wavelength, are modeled in terms of five likely endmembers: mare basalt, mature mare soil, anorthosite, mature highland soil, and clinopyroxene. These endmembers were chosen from a similar analysis of 92 lunar soil and rock samples. The models fit the data to within 2 percent rms. It is found that the goodness of fit is marginally better for intimate mixing over macroscopic mixing.

  6. Application of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of intrusion detection systems and firewalls, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for the evaluation of intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models of each server, and action models of the clients and the intruder. We applied ICA to analyze the audit trail of the simulated information system and report the results of evaluating ICA for intrusion analysis. In the simulated case, ICA correctly separated two attacks and related each attack to the anomalies produced in normal applications under its influence.

  7. Two-component signal transduction in Agaricus bisporus: a comparative genomic analysis with other basidiomycetes through the web-based tool BASID2CS.

    PubMed

    Lavín, José L; García-Yoldi, Alberto; Ramírez, Lucía; Pisabarro, Antonio G; Oguiza, José A

    2013-06-01

    Two-component systems (TCSs) are signal transduction mechanisms present in many eukaryotes, including fungi that play essential roles in the regulation of several cellular functions and responses. In this study, we carry out a genomic analysis of the TCS proteins in two varieties of the white button mushroom Agaricus bisporus. The genomes of both A. bisporus varieties contain eight genes coding for TCS proteins, which include four hybrid Histidine Kinases (HKs), a single histidine-containing phosphotransfer (HPt) protein and three Response Regulators (RRs). Comparison of the TCS proteins among A. bisporus and the sequenced basidiomycetes showed a conserved core complement of five TCS proteins including the Tco1/Nik1 hybrid HK, HPt protein and Ssk1, Skn7 and Rim15-like RRs. In addition, Dual-HKs, unusual hybrid HKs with 2 HK and 2 RR domains, are absent in A. bisporus and are limited to various species of basidiomycetes. Differential expression analysis showed no significant up- or down-regulation of the Agaricus TCS genes in the conditions/tissue analyzed with the exception of the Skn7-like RR gene (Agabi_varbisH97_2|198669) that is significantly up-regulated on compost compared to cultured mycelia. Furthermore, the pipeline web server BASID2CS (http://bioinformatics.unavarra.es:1000/B2CS/BASID2CS.htm) has been specifically designed for the identification, classification and functional annotation of putative TCS proteins from any predicted proteome of basidiomycetes using a combination of several bioinformatic approaches.

  8. Principal-component-based population structure adjustment in the North American Rheumatoid Arthritis Consortium data: impact of single-nucleotide polymorphism set and analysis method

    PubMed Central

    2009-01-01

    Population structure occurs when a sample is composed of individuals with different ancestries and can result in excess type I error in genome-wide association studies. Genome-wide principal-component analysis (PCA) has become a popular method for identifying and adjusting for subtle population structure in association studies. Using the Genetic Analysis Workshop 16 (GAW16) NARAC data, we explore two unresolved issues concerning the use of genome-wide PCA to account for population structure in genetic association studies: the choice of single-nucleotide polymorphism (SNP) subset and the choice of adjustment model. We computed PCs for subsets of genome-wide SNPs with varying levels of linkage disequilibrium (LD). The first two PCs were similar for all subsets, and the first three PCs were associated with case status for all subsets. When the PCs associated with case status were included as covariates in an association model, the reduction in the genomic inflation factor was similar for all SNP sets. Several models have been proposed to account for structure using PCs, but it is not yet clear whether the different methods will result in substantively different results for association studies with individuals of European descent. We compared genome-wide association p-values and results for two positive-control SNPs previously associated with rheumatoid arthritis using four PC adjustment methods as well as no adjustment and genomic control. We found that in this sample, adjusting for the continuous PCs or adjusting for discrete clusters identified using the PCs adequately accounts for the case-control population structure, but a recently proposed randomization test performs poorly. PMID:20017972

  9. Using surface electromyography (SEMG) to classify low back pain based on lifting capacity evaluation with principal component analysis neural network method.

    PubMed

    Hung, Chia-Chun; Shen, Tsu-Wang; Liang, Chung-Chao; Wu, Wen-Tien

    2014-01-01

    Low back pain (LBP) is a leading cause of disability, and the population with low back pain has grown continuously in recent years. This study attempts to distinguish LBP patients from healthy subjects by using objective surface electromyography (SEMG) as a quantitative score for clinical evaluations. Twenty-six healthy subjects and 26 low back pain patients were involved in this research; they lifted different weights in static and dynamic lifting processes. Multiple features were extracted from the raw SEMG data, including energy and frequency indexes, and false discovery rate (FDR) control was used to omit false-positive features. Then, a principal component analysis neural network (PCANN) was used for classification. The results showed that features at different loadings (including 30% and 50% loading) on lifting can be used to distinguish healthy subjects from back pain patients. Using the PCANN method, accuracies of more than 80% were achieved when different lifting weights were applied. Moreover, some EMG features correlated with clinical scales of exertion, fatigue, and pain. This technology can potentially be used in future research as a computer-aided diagnosis tool for LBP evaluation.

  10. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children

    PubMed Central

    Wassenburg, Stephanie I.; de Koning, Björn B.; de Vries, Meinou H.; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension. PMID:27378989

  11. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children.

    PubMed

    Wassenburg, Stephanie I; de Koning, Björn B; de Vries, Meinou H; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension.

  12. Experimental system and component performance analysis

    SciTech Connect

    Peterman, K.

    1984-10-01

    A prototype dye laser flow loop was constructed to flow test large power amplifiers in Building 169. The flow loop is designed to operate at supply pressures up to 900 psig and flow rates up to 250 GPM. During the initial startup of the flow loop, experimental measurements were made to evaluate component and system performance. Three candidate dye flow loop pumps and three different pulsation dampeners were tested.

  13. Image denoising using principal component analysis in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bacchelli, Silvia; Papi, Serena

    2006-05-01

    In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and the principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen-Loeve transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of a large numerical experimentation encourage us to keep going in this direction with our studies.
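
    As a minimal sketch of the core idea, and assuming the PyWavelets and scikit-learn packages, the code below decomposes an image with a plain discrete wavelet transform (the paper uses wavelet packets), applies PCA to non-overlapping patches of each detail subband, discards the low-variance components where the noise concentrates, and inverts. The patch size and energy threshold are illustrative choices, not the authors' tailored filters.

```python
# Wavelet-domain PCA denoising sketch: shrink low-variance principal
# components of subband patches, where noise energy is spread.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def pca_shrink(band, patch=4, keep_var=0.90):
    """Keep only the principal components carrying keep_var of the energy."""
    h, w = band.shape
    H, W = h - h % patch, w - w % patch
    blocks = band[:H, :W].reshape(H // patch, patch, W // patch, patch)
    X = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    pca = PCA().fit(X)
    Z = pca.transform(X)
    k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), keep_var)) + 1
    Z[:, k:] = 0.0                      # zero the noise-dominated components
    Xd = pca.inverse_transform(Z)
    out = band.copy()
    out[:H, :W] = (Xd.reshape(H // patch, W // patch, patch, patch)
                     .transpose(0, 2, 1, 3).reshape(H, W))
    return out

img = np.random.rand(128, 128)          # stand-in for a noisy image
coeffs = pywt.wavedec2(img, 'db2', level=2)
coeffs = [coeffs[0]] + [tuple(pca_shrink(b) for b in d) for d in coeffs[1:]]
denoised = pywt.waverec2(coeffs, 'db2')
```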

  14. Study on failure analysis of array chip components in IRFPA

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaonan; He, Yingjie; Li, Jinping

    2016-10-01

    Infrared focal plane array (IRFPA) detectors have the advantages of strong anti-interference ability and high sensitivity, and their size, weight and power dissipation have been noticeably decreased compared to conventional infrared imaging systems. With the development of detector manufacturing technology and the reduction of cost, IRFPA detectors have been widely used in military and commercial fields. Due to limitations of the array chip manufacturing process and material defects, fault phenomena such as cracking, bad pixels and abnormal output were observed during testing; these restrict the performance of the infrared detector imaging system, and their effects gradually intensify as the focal plane array size expands and the pixel size shrinks. Based on the analysis of test results for infrared detector array chip components, the fault phenomena were classified. The main failure modes of the chip components are chip cracking, bad pixels and abnormal output. The causes of failure were analyzed in depth and, based on the failure mechanisms, a series of measures, including screening materials and optimizing the manufacturing process of the array chip components, were taken to improve the performance of the chip components and the test pass rate, in order to meet the required detector performance.

  15. Component outage data analysis methods. Volume 2: Basic statistical methods

    NASA Astrophysics Data System (ADS)

    Marshall, J. A.; Mazumdar, M.; McCutchan, D. A.

    1981-08-01

    Statistical methods for analyzing outage data on major power system components such as generating units, transmission lines, and transformers are identified. The analysis methods produce outage statistics from component failure and repair data that help in understanding the failure causes and failure modes of various types of components. Methods for forecasting outage statistics for those components used in the evaluation of system reliability are emphasized.

  16. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  17. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. In order to make a quantitative assessment of the algorithms in the experiments, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, which is consistent with the visual effect of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising: the recovered image is visually hard to distinguish from the original noiseless image.
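
    GMCA itself estimates the mixing blindly across multiple channels; the sketch below illustrates only the underlying MCA-style iterative thresholding on a single 1-D signal, separating a component sparse in the DCT domain from one sparse in the identity domain under a linearly decaying threshold. The dictionaries and schedule are toy choices, not the authors' configuration.

```python
# Minimal 1-D morphological component analysis: decreasing-threshold
# alternation between two dictionaries (DCT-sparse vs. spike-sparse).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
smooth = np.cos(2 * np.pi * 5 * t / n)                 # sparse in DCT
spikes = np.zeros(n); spikes[rng.choice(n, 5)] = 3.0   # sparse in identity
y = smooth + spikes + 0.01 * rng.normal(size=n)

xc = np.zeros(n)  # DCT-sparse estimate
xs = np.zeros(n)  # spike estimate
lam0 = np.abs(dct(y, norm='ortho')).max()
for it in range(100):
    lam = lam0 * (1 - it / 100)                        # decaying threshold
    c = dct(y - xs, norm='ortho')                      # significant features first
    xc = idct(np.where(np.abs(c) >= lam, c, 0.0), norm='ortho')
    r = y - xc
    xs = np.where(np.abs(r) >= lam, r, 0.0)
print("residual energy:", np.sum((y - xc - xs) ** 2))
```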

  18. Columbia River Component Data Gap Analysis

    SciTech Connect

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. This Data Gap Analysis study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening level risk assessments.

  19. Si-based RF MEMS components.

    SciTech Connect

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  20. Direct Numerical Simulation of Combustion Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Owoyele, Opeoluwa; Echekki, Tarek

    2016-11-01

    We investigate the potential of accelerating chemistry integration during the direct numerical simulation (DNS) of complex fuels based on the transport equations of representative scalars that span the desired composition space using principal component analysis (PCA). The transported principal components (PCs) offer significant potential to reduce the computational cost of DNS through a reduction in the number of transported scalars, as well as in the spatial and temporal resolution requirements. The strategy is demonstrated using DNS of a premixed methane-air flame in a 2D vortical flow and is extended to a 3D geometry to further demonstrate the computational efficiency of PC transport. The PCs are derived from a priori PCA of a subset of the full thermo-chemical scalars' vector. The PCs' chemical source terms and transport properties are constructed and tabulated in terms of the PCs using artificial neural networks (ANNs). Comparison of DNS based on the full thermo-chemical state with DNS based on transport of 6 PCs shows excellent agreement, even for species that are not included in the PCA reduction. The transported PCs reproduce some of the salient features of strongly curved and strongly strained flames. The 2D DNS results also show a significant reduction, of two orders of magnitude, in the computational cost of the simulations, which enables an extension of the PCA approach to 3D DNS under similar computational requirements. This work was supported by the National Science Foundation Grant DMS-1217200.
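
    A hedged sketch of the tabulation step, assuming scikit-learn and synthetic stand-ins for the DNS data: project thermo-chemical states onto a few PCs, map the chemical source terms with the same linear projection, and fit a neural network from PCs to PC source terms.

```python
# PC-transport tabulation sketch: PCA of the state space plus an ANN mapping
# PCs to their source terms. Data are synthetic placeholders for DNS states.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
states = rng.normal(size=(5000, 12))      # thermo-chemical scalars (T, species)
sources = rng.normal(size=(5000, 12))     # corresponding chemical source terms

pca = PCA(n_components=6).fit(states)     # a priori PCA of the state space
pcs = pca.transform(states)
# Source terms of the PCs follow from the same linear projection A^T * S.
pc_sources = sources @ pca.components_.T

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
ann.fit(pcs, pc_sources)                  # tabulate source terms in PC space
print("training R^2:", ann.score(pcs, pc_sources))
```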

  1. Core bioactive components promoting blood circulation in the traditional Chinese medicine compound xueshuantong capsule (CXC) based on the relevance analysis between chemical HPLC fingerprint and in vivo biological effects.

    PubMed

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  2. Core Bioactive Components Promoting Blood Circulation in the Traditional Chinese Medicine Compound Xueshuantong Capsule (CXC) Based on the Relevance Analysis between Chemical HPLC Fingerprint and In Vivo Biological Effects

    PubMed Central

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  3. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

    A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
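
    The probabilistic approach can be illustrated with a plain Monte Carlo propagation. The snippet below uses a made-up damping surrogate in place of the aeroelastic solver; the inputs, their distributions, and the correlation-based sensitivity factors are all illustrative.

```python
# Monte Carlo uncertainty propagation through a toy damping surrogate,
# producing a flutter probability and simple sensitivity factors.
import numpy as np

rng = np.random.default_rng(4)
n = 20000
freq = rng.normal(100.0, 5.0, n)      # blade natural frequency [Hz]
mass = rng.normal(1.0, 0.05, n)       # mass ratio [-]
mach = rng.normal(0.7, 0.02, n)       # inflow Mach number [-]

# Toy surrogate: damping decreases with Mach and increases with mass ratio.
damping = (0.02 + 0.01 * (mass - 1.0) - 0.05 * (mach - 0.7)
           + 0.0001 * (freq - 100.0))

print("P(flutter) = P(damping < 0):", np.mean(damping < 0.0))
for name, x in [("freq", freq), ("mass", mass), ("mach", mach)]:
    print(name, "sensitivity:", np.corrcoef(x, damping)[0, 1].round(3))
```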

  4. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) to learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The architecture of a CBA-based learning management system being developed at Riga Technical University, including its elements and possibilities, is…

  5. Balancing generality and specificity in component-based reuse

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Beck, Jon

    1992-01-01

    For a component industry to be successful, we must move beyond the current techniques of black box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to customization of reusable components.

  6. Application of independent component analysis in face images: a survey

    NASA Astrophysics Data System (ADS)

    Huang, Yuchi; Lu, Hanqing

    2003-09-01

    Face technologies, which can be applied to access control and surveillance, are essential to intelligent vision-based human-computer interaction. The research efforts in this field include face detection, face recognition, face retrieval, etc. However, these tasks are challenging because of variability in view point, lighting, pose and expression of human faces. The ideal face representation should account for this variability so that we can develop robust algorithms for our applications. Independent Component Analysis (ICA), as an unsupervised learning technique, has been used to find such representations and has obtained good performance in some applications. In the first part of this paper, we describe the models of ICA and its extensions: Independent Subspace Analysis (ISA) and Topographic ICA (TICA). Then we summarize progress in the applications of ICA and its extensions to face images. Finally, we propose a promising direction for future research.

  7. Power analysis of principal components regression in genetic association studies.

    PubMed

    Shen, Yan-feng; Zhu, Jun

    2009-10-01

    Association analysis provides an opportunity to find genetic variants underlying complex traits. A principal components regression (PCR)-based approach was shown to outperform some competing approaches. However, a limitation of this method is that the principal components (PCs) selected from single nucleotide polymorphisms (SNPs) may be unrelated to the phenotype. In this article, we investigate the theoretical properties of such a method in more detail. We first derive the exact power function of the test based on PCR, and hence clarify the relationship between the test power and the degrees of freedom (DF). Next, we extend the PCR test to a general weighted-PCs test, which provides a unified framework for understanding the properties of some related statistics. We then compare the performance of these tests. We also introduce several data-driven adaptive alternatives to overcome difficulties in the PCR approach. Finally, we illustrate our results using simulations based on real genotype data. The simulation study shows the risk of using an unsupervised rule to determine the number of PCs, and demonstrates that there is no single uniformly powerful method for detecting genetic variants.
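
    A minimal sketch of the PCR test discussed above, on toy data: regress the phenotype on the top k PCs of a standardized SNP block and test them jointly with an F-test on k numerator degrees of freedom. All names and sizes are illustrative.

```python
# Principal components regression association test with a joint F-test.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n, p, k = 400, 50, 5
G = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP block (0/1/2)
y = G[:, 0] * 0.3 + rng.normal(size=n)              # phenotype

Z = PCA(n_components=k).fit_transform((G - G.mean(0)) / (G.std(0) + 1e-12))
X = np.hstack([np.ones((n, 1)), Z])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta) ** 2)                   # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)                   # total sum of squares
F = ((tss - rss) / k) / (rss / (n - k - 1))
print("F =", F, "p =", stats.f.sf(F, k, n - k - 1))
```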

  8. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
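
    DOGEDYN itself is not reproduced here; for context, the sketch below implements the classical gradient-descent sequential PCA update (Oja's rule) that this class of methods builds on, with a fixed learning rate instead of DOGEDYN's dynamic one.

```python
# Oja's rule: a gradient-descent update that converges to the first
# principal component of streaming, zero-mean data.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(10000, 8)) @ rng.normal(size=(8, 8))  # correlated data
X -= X.mean(axis=0)

w = rng.normal(size=8)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Hebbian term plus implicit normalization

# Compare with the leading eigenvector from batch PCA.
evals, evecs = np.linalg.eigh(np.cov(X.T))
print("alignment with true first PC:",
      abs(w @ evecs[:, -1]) / np.linalg.norm(w))
```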

  9. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distribution. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distribution can be reconstructed via the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  10. Advances in resonance based NDT for ceramic components

    NASA Astrophysics Data System (ADS)

    Hunter, L. J.; Jauriqui, L. M.; Gatewood, G. D.; Sisneros, R.

    2012-05-01

    The application of resonance based non-destructive testing methods has been providing benefit to manufacturers of metal components in the automotive and aerospace industries for many years. Recent developments in resonance based technologies are now allowing the application of resonance NDT to ceramic components including turbine engine components, armor, and hybrid bearing rolling elements. Application of higher frequencies and advanced signal interpretation are now allowing Process Compensated Resonance Testing to detect both internal material defects and surface breaking cracks in a variety of ceramic components. Resonance techniques can also be applied to determine material properties of coupons and to evaluate process capability for new manufacturing methods.

  11. Analysis of nuclear power plant component failures

    SciTech Connect

    Not Available

    1984-01-01

    Items are shown that have caused 90% of the nuclear unit outages and/or deratings between 1971 and 1980, and the magnitude of the problem is indicated by an estimate of power replacement cost when the units are out of service or derated. The funding EPRI has provided on these specific items for R and D and technology transfer in the past and the funding planned in the future (1982 to 1986) are shown. EPRI's R and D may help the utilities on only a small part of their nuclear unit outage problems. For example, refueling is the major cause of nuclear unit outages or deratings, and the steam turbine is the second major cause of nuclear unit outages; however, these two items have been ranked fairly low on the EPRI priority list for R and D funding. Other items such as nuclear safety (NRC requirements), reactor general, reactor and safety valves and piping, and reactor fuel appear to be receiving more priority than is necessary as determined by analysis of nuclear unit outage causes.

  12. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery, emergency response, financial and intelligence analysis exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.

  13. [Royal jelly: component efficiency, analysis, and standardisation].

    PubMed

    Oršolić, Nada

    2013-09-01

    Royal jelly is a viscous substance secreted by the hypopharyngeal and mandibular glands of worker honeybees (Apis mellifera) that contains a considerable amount of proteins, free amino acids, lipids, vitamins, sugars, and bioactive substances such as 10-hydroxy-trans-2-decenoic acid, antibacterial protein, and 350-kDa protein. These properties make it an attractive ingredient in various types of healthy foods. This article brings a brief review of the molecular mechanisms involved in the development of certain disorders that can be remedied by royal jelly, based on a selection of in vivo and in vitro studies. It also describes current understanding of the mechanisms and beneficial effects by which royal jelly helps to combat aging-related complications. Royal jelly has been reported to exhibit beneficial physiological and pharmacological effects in mammals, including vasodilative and hypotensive activities, antihypercholesterolemic activity, and antitumor activity. As its composition varies significantly (for both fresh and dehydrated samples), the article brings a few recommendations for defining new quality standards.

  14. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  15. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which are not only greater than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results.

  16. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is a method adapted for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. This VCA proposes that a LiDAR point cloud can be represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space; the sections, when combined, represent features of the entire 3D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects (vegetation, vehicles, buildings and barriers) with an overall accuracy of 93.8%.
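
    A compact sketch of this pipeline, assuming scikit-learn and toy random point clouds in place of LiDAR data: voxelize each cloud into a density grid, apply PCA block by block, concatenate the block features, and train an SVM. The grid size, block layout, and class labels are illustrative.

```python
# Voxelize point clouds, extract block-based PCA features, classify with SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def voxelize(points, grid=8):
    """Point density per voxel on a grid x grid x grid lattice."""
    hist, _ = np.histogramdd(points, bins=(grid, grid, grid),
                             range=[(0, 1)] * 3)
    return hist

clouds = [rng.random((200, 3)) for _ in range(40)]      # toy point clouds
labels = rng.integers(0, 4, size=40)                    # 4 object classes

vox = np.stack([voxelize(c) for c in clouds])           # (40, 8, 8, 8)
blocks = vox.reshape(40, 8, -1)                         # 8 blocks of 64 voxels

# Block-based PCA: reduce each block separately, then concatenate.
feats = np.hstack([PCA(n_components=4).fit_transform(blocks[:, b, :])
                   for b in range(8)])
clf = SVC(kernel='rbf').fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```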

  17. Key components of financial-analysis education for clinical nurses.

    PubMed

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, nurse executive, nurse managers, and an accountant, participated in the content validity index. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.

  18. Three component microseism analysis in Australia from deconvolution enhanced beamforming

    NASA Astrophysics Data System (ADS)

    Gal, Martin; Reading, Anya; Ellingsen, Simon; Koper, Keith; Burlacu, Relu; Tkalčić, Hrvoje; Gibbons, Steven

    2016-04-01

    Ocean induced microseisms in the range 2-10 s are generated in deep oceans and near coastal regions as body and surface waves. The generation of these waves can take place over an extended area and in a variety of geographical locations at the same time. It is therefore common to observe multiple arrivals with a variety of slowness vectors, which leads to the desire to measure multiple arrivals accurately. We present a deconvolution enhanced direction-of-arrival algorithm, for single and 3 component arrays, based on CLEAN. The algorithm iteratively removes sidelobe contributions in the power spectrum, thereby improving the signal-to-noise ratio of weaker sources. The power level on each component (vertical, radial and transverse) can be accurately estimated as the beamformer decomposes the power spectrum into point sources. We first apply the CLEAN aided beamformer to synthetic data to show its performance under known conditions and then evaluate real (observed) data from a range of arrays with apertures between 10 and 70 km (ASAR, WRA and NORSAR) to showcase the improvement in resolution. We further give a detailed analysis of the 3 component wavefield in Australia, including source locations, power levels, phase ratios, etc., from two spiral arrays (PSAR and SQspa). For PSAR the analysis is carried out in the frequency range 0.35-1 Hz. We find LQ, Lg, and fundamental and higher mode Rg wave phases. Additionally, we also observe the Sn phase. This is the first time this has been achieved through beamforming on microseism noise and underlines the potential for extra seismological information that can be extracted using the new implementation of CLEAN. The fundamental mode Rg waves are dominant in power at low frequencies and show power levels equal to LQ towards higher frequencies. Generation locations of Rg and LQ are mildly correlated at low frequencies and uncorrelated at higher frequencies. Results from SQspa will discuss lower frequencies around the

  19. Arthropod surveillance programs: Basic components, strategies, and analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium “Advancements in arthro...

  20. Principal component analysis of minimal excitatory postsynaptic potentials.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1998-02-20

    'Minimal' excitatory postsynaptic potentials (EPSPs) are often recorded from central neurones, specifically for quantal analysis. However, the EPSPs may emerge from activation of several fibres or transmission sites, so that formal quantal analysis may give false results. Here we extended the application of principal component analysis (PCA) to minimal EPSPs. We tested a PCA algorithm and a new graphical 'alignment' procedure against both simulated data and hippocampal EPSPs. Minimal EPSPs were recorded before and up to 3.5 h following induction of long-term potentiation (LTP) in CA1 neurones. In 29 out of 45 EPSPs, two (N=22) or three (N=7) components were detected which differed in latency, rise time (Trise) or both. The detected differences ranged from 0.6 to 7.8 ms for the latency and from 1.6 to 9 ms for Trise. Different components behaved differently following LTP induction. Cases were found in which one component was potentiated immediately after tetanus whereas the other was potentiated with a delay of 15-60 min. The immediately potentiated component could decline in 1-2 h, so that the two components contributed differently to the early (< 1 h) LTP1 and later (1-4 h) LTP2 phases. The noise deconvolution technique was applied to both conventional EPSP amplitudes and scores of separate components. Cases are illustrated in which quantal size (upsilon) estimated from the EPSP amplitudes increased whereas upsilon estimated from the component scores was stable during LTP1. Analysis of component scores could show apparent two-fold increases in upsilon, which are interpreted as reflections of synchronized quantal release. In general, the results demonstrate the applicability of PCA for separating EPSPs into different components and its usefulness for precise analysis of synaptic transmission.
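
    The basic PCA step can be sketched as follows on synthetic two-component traces: stack single-trial waveforms into a trials-by-time matrix, extract the leading components, and use the per-trial scores for further (e.g., quantal) analysis. The alpha-function waveforms, latencies, and amplitude distributions are illustrative only.

```python
# PCA decomposition of single-trial EPSP-like waveforms into components
# with different latencies; per-trial scores serve as component amplitudes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
t = np.arange(100) * 0.5                      # time base [ms]

def epsp(latency, amp):
    r = np.clip(t - latency, 0, None)
    return amp * r * np.exp(-r / 5.0)         # alpha-function waveform

trials = np.array([epsp(5, rng.gamma(2, 0.5)) + epsp(12, rng.gamma(2, 0.5))
                   + 0.05 * rng.normal(size=t.size) for _ in range(200)])

pca = PCA(n_components=2).fit(trials)
scores = pca.transform(trials)                # per-trial component scores
print("explained variance:", pca.explained_variance_ratio_)
```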

  1. Integrated Analysis of Piezoelectric Resonators as Components of Electronic Systems

    DTIC Science & Technology

    2015-09-07

    Final Report: Integrated Analysis of Piezoelectric Resonators as Components of Electronic Systems (reporting period ending 31-Aug-2014). Approved for Public Release; Distribution Unlimited. Only report-form fragments survive in this record; they refer to experimental and theoretical modeling results and to integrating resonator analysis into the design cycle of a military electronic system.

  2. Array Independent Component Analysis with Application to Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kukuyeva, Irina A.

    2012-11-01

    There are three ways to learn about an object: from samples taken directly from the site, from simulation studies based on its known scientific properties, or from remote sensing images. All three are carried out to study Earth and Mars. Our goal, however, is to learn about the second largest storm on Jupiter, called the White Oval, whose characteristics are unknown to this day. As Jupiter is a gas giant and hundreds of millions of miles away from Earth, we can only make inferences about the planet from retrieval algorithms and remotely sensed images. Our focus is to find latent variables from the remotely sensed data that best explain its underlying atmospheric structure. Principal Component Analysis (PCA) is currently the most commonly employed technique to do so. For a data set with more than two modes, this approach fails to account for all of the variable interactions, especially if the distribution of the variables is not multivariate normal; an assumption that is rarely true of multispectral images. The thesis presents an overview of PCA along with the most commonly employed decompositions in other fields: Independent Component Analysis, Tucker-3 and CANDECOMP/PARAFAC and discusses their limitations in finding unobserved, independent structures in a data cube. We motivate the need for a novel dimension reduction technique that generalizes existing decompositions to find latent, statistically independent variables for one side of a multimodal (number of modes greater than two) data set while accounting for the variable interactions with its other modes. Our method is called Array Independent Component Analysis (AICA). As the main question of any decomposition is how to select a small number of latent variables that best capture the structure in the data, we extend the heuristic developed by Ceulemans and Kiers in [10] to aid in model selection for the AICA framework. The effectiveness of each dimension reduction technique is determined by the degree of

  3. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    SciTech Connect

    Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes, and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter not to be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields, which show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.

  4. Transmission of mechanical stresses within the cytoskeleton of adherent cells: a theoretical analysis based on a multi-component cell model.

    PubMed

    Tracqui, Philippe; Ohayon, Jacques

    2004-01-01

    How environmental mechanical forces affect cellular functions is a central problem in cell biology. Theoretical models of cellular biomechanics provide relevant tools for understanding how the contributions of deformable intracellular components and specific adhesion conditions at the cell interface are integrated for determining the overall balance of mechanical forces within the cell. We investigate here the spatial distributions of intracellular stresses when adherent cells are probed by magnetic twisting cytometry. The influence of the cell nucleus stiffness on the simulated nonlinear torque-bead rotation response is analyzed by considering a finite element multi-component cell model in which the cell and its nucleus are considered as different hyperelastic materials. We additionally take into account the mechanical properties of the basal cell cortex, which can be affected by the interaction of the basal cell membrane with the extracellular substrate. In agreement with data obtained on epithelial cells, the simulated behaviour of the cell model relates the hyperelastic response observed at the entire cell scale to the distribution of stresses and strains within the nucleus and the cytoskeleton, up to cell adhesion areas. These results, which indicate how mechanical forces are transmitted at distant points through the cytoskeleton, are compared to recent data imaging the highly localized distribution of intracellular stresses.

  5. Independent Component Analysis to Detect Clustered Microcalcification Breast Cancers

    PubMed Central

    Gallardo-Caballero, R.; García-Orellana, C. J.; García-Manso, A.; González-Velasco, H. M.; Macías-Macías, M.

    2012-01-01

    The presence of clustered microcalcifications is one of the earliest signs in breast cancer detection. Although there exist many studies broaching this problem, most of them are nonreproducible due to the use of proprietary image datasets. We use a known subset of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM), to develop a computer-aided detection system that outperforms the current reproducible studies on the same mammogram set. This proposal is mainly based on the use of extracted image features obtained by independent component analysis, but we also study the inclusion of the patient's age as a nonimage feature which requires no human expertise. Our system achieves an average of 2.55 false positives per image at a sensitivity of 81.8% and 4.45 at a sensitivity of 91.8% in diagnosing the BCRP_CALC_1 subset of DDSM. PMID:22654626

  6. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approaches in the literature, which only use centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  7. Using Dynamic Master Logic Diagram for component partial failure analysis

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    A methodology using the Dynamic Master Logic Diagram (DMLD) for the evaluation of component partial failure is presented. Since past PRAs have not focused on partial failure effects, the reliability of components is only based on the binary state assumption, i.e., defining a component as fully failed or functioning. This paper develops an approach to predict and estimate component partial failure on the basis of the fuzzy state assumption. An example of the application of this methodology with the reliability function diagram of a centrifugal pump is presented.

  8. Estimation of individual evoked potential components using iterative independent component analysis.

    PubMed

    Zouridakis, G; Iyer, D; Diaz, J; Patidar, U

    2007-09-07

    Independent component analysis (ICA) has been successfully employed in the study of single-trial evoked potentials (EPs). In this paper, we present an iterative temporal ICA methodology that processes multielectrode single-trial EPs, one channel at a time, in contrast to most existing methodologies which are spatial and analyze EPs from all recording channels simultaneously. The proposed algorithm aims at enhancing individual components in an EP waveform in each single trial, and relies on a dynamic template to guide EP estimation. To quantify the performance of this method, we carried out extensive analyses with artificial EPs, using different models for EP generation, including the phase-resetting and the classical additive-signal models, and several signal-to-noise ratios and EP component latency jitters. Furthermore, to validate the technique, we employed actual recordings of the auditory N100 component obtained from normal subjects. Our results with artificial data show that the proposed procedure can provide significantly better estimates of the embedded EP signals compared to plain averaging, while with actual EP recordings, the procedure can consistently enhance individual components in single trials, in all subjects, which in turn results in enhanced average EPs. This procedure is well suited for fast analysis of very large multielectrode recordings in parallel architectures, as individual channels can be processed simultaneously on different processors. We conclude that this method can be used to study the spatiotemporal evolution of specific EP components and may have a significant impact as a clinical tool in the analysis of single-trial EPs.

  9. Principal Component Analysis for pattern recognition in volcano seismic spectra

    NASA Astrophysics Data System (ADS)

    Unglert, Katharina; Jellinek, A. Mark

    2016-04-01

    Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
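
    A minimal sketch of this scheme, assuming scikit-learn and SciPy on synthetic spectra: reduce the spectra with PCA and cluster the scores hierarchically with Ward linkage. The synthetic spectral families and the cluster count are illustrative.

```python
# PCA of seismic spectra followed by hierarchical clustering of PC scores.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
freqs = np.linspace(0.1, 10, 200)
# Two synthetic spectral families with peaks at different frequencies.
spectra = np.vstack([np.exp(-(freqs - f0) ** 2) + 0.05 * rng.random(200)
                     for f0 in rng.choice([1.0, 3.0], size=60)])

scores = PCA(n_components=3).fit_transform(spectra)
tree = linkage(scores, method='ward')
clusters = fcluster(tree, t=2, criterion='maxclust')
print("cluster sizes:", np.bincount(clusters)[1:])
```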

  10. Principal Component Analysis and Cluster Analysis in Profile of Electrical System

    NASA Astrophysics Data System (ADS)

    Iswan; Garniwa, I.

    2017-03-01

    This paper proposes an approach for profiling an electrical system; the presented approach is a combined algorithm, namely principal component analysis (PCA) and cluster analysis, based on relevant data on gross domestic regional product and electric power and energy use. This profile is set up to show the condition of the electrical system of the region, which will be used as a policy input for spatial development of the electrical system in the future. This paper considers 24 regions in South Sulawesi province as profile center points and uses PCA to assess the regional profile for development. Cluster analysis is used to group these regions into a few clusters according to the new variables produced by PCA. The general planning of the electrical system of South Sulawesi province can provide support for policy making on electrical system development. Future research may add several variables to the existing ones.

  11. Blind Extraction of an Exoplanetary Spectrum through Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.; Tinetti, G.; Deroo, P.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J.

    2013-03-01

    Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a "blind" analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric detrending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
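
    The idea can be sketched on toy light curves with scikit-learn's FastICA: treat simultaneously observed spectral channels as linear mixtures of a transit signal and a systematic, and unmix them with no other prior. The signals, mixing matrix, and noise level below are illustrative; the published pipeline adds careful component identification and error estimation.

```python
# ICA-based detrending sketch: unmix a transit signal from shared systematics.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
n = 500
transit = np.ones(n); transit[200:300] -= 0.01     # box-shaped transit dip
systematic = np.sin(np.linspace(0, 20, n))         # instrument systematics

# Several spectral channels = different mixtures of the same two sources.
mixing = rng.uniform(0.5, 1.5, size=(6, 2))
X = np.column_stack([transit, systematic]) @ mixing.T
X += 0.001 * rng.normal(size=X.shape)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                     # estimated independent sources
print("recovered source matrix shape:", sources.shape)
```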

  12. Quantitative Analysis of Porosity and Transport Properties by FIB-SEM 3D Imaging of a Solder Based Sintered Silver for a New Microelectronic Component

    NASA Astrophysics Data System (ADS)

    Rmili, W.; Vivet, N.; Chupin, S.; Le Bihan, T.; Le Quilliec, G.; Richard, C.

    2016-04-01

    As part of development of a new assembly technology to achieve bonding for an innovative silicon carbide (SiC) power device used in harsh environments, the aim of this study is to compare two silver sintering profiles and then to define the best candidate for die attach material for this new component. To achieve this goal, the solder joints have been characterized in terms of porosity by determination of the morphological characteristics of the material heterogeneities and estimating their thermal and electrical transport properties. The three dimensional (3D) microstructure of sintered silver samples has been reconstructed using a focused ion beam scanning electron microscope (FIB-SEM) tomography technique. The sample preparation and the experimental milling and imaging parameters have been optimized in order to obtain a high quality of 3D reconstruction. Volume fractions and volumetric connectivity of the individual phases (silver and voids) have been determined. Effective thermal and electrical conductivities of the samples and the tortuosity of the silver phase have been also evaluated by solving the diffusive transport equation.

  13. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    ERIC Educational Resources Information Center

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to study the components of the internal control-based administrative system in secondary schools, and make a Confirmatory Factor Analysis (CFA) to confirm the goodness of fit of empirical data and component model that resulted from the CFA. The study consisted of three steps: 1) studying of principles, ideas, and theories…

  14. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as an inpatient, before they were sent home, and before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
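
    A minimal sketch of the decomposition described above, with synthetic stand-ins for the five throughput measures; the last lines show how one would contrast ICA sources with PCA scores.

        import numpy as np
        from sklearn.decomposition import FastICA, PCA

        rng = np.random.default_rng(1)
        S = rng.laplace(size=(3086, 2))        # two hypothetical latent sources
        A = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 1.0],
                      [0.6, 0.7], [0.2, 0.9]])
        X = S @ A.T                            # five measures per hospital

        ica_sources = FastICA(n_components=2, random_state=0).fit_transform(X)
        pca_scores = PCA(n_components=2).fit_transform(X)
        # ICA recovers the non-Gaussian sources up to order and scale;
        # PCA only returns orthogonal directions of maximal variance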

  15. Using Independent Component Analysis to Separate Signals in Climate Data

    SciTech Connect

    Fodor, I K; Kamath, C

    2003-01-28

    Global temperature series have contributions from different sources, such as volcanic eruptions and El Niño-Southern Oscillation variations. We investigate independent component analysis as a technique to separate unrelated sources present in such series. We first use artificial data, with known independent components, to study the conditions under which ICA can separate the individual sources. We then illustrate the method with climate data from the National Centers for Environmental Prediction.

  16. Exploration of shape variation using localized components analysis.

    PubMed

    Alcantara, Dan A; Carmichael, Owen; Harcourt-Smith, Will; Sterner, Kirstin; Frost, Stephen R; Dutton, Rebecca; Thompson, Paul; Delson, Eric; Amenta, Nina

    2009-08-01

    Localized Components Analysis (LoCA) is a new method for describing surface shape variation in an ensemble of objects using a linear subspace of spatially localized shape components. In contrast to earlier methods, LoCA optimizes explicitly for localized components and allows a flexible trade-off between localized and concise representations, and the formulation of locality is flexible enough to incorporate properties such as symmetry. This paper demonstrates that LoCA can provide intuitive presentations of shape differences associated with sex, disease state, and species in a broad range of biomedical specimens, including human brain regions and monkey crania.

  17. Computer compensation for NMR quantitative analysis of trace components

    SciTech Connect

    Nakayama, T.; Fujiwara, Y.

    1981-07-22

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as a theoretical curve of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the degree of saponification of PVA.
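
    A sketch of the core computation, assuming SciPy's nonlinear least squares; the two-peak example (one major component, one trace component) and all parameter values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_lorentzians(x, a1, x1, w1, a2, x2, w2):
            # sum of two Lorentzian lines: amplitude a, center x0, half-width w
            return (a1 * w1**2 / ((x - x1)**2 + w1**2)
                    + a2 * w2**2 / ((x - x2)**2 + w2**2))

        x = np.linspace(-5, 5, 1000)
        y = two_lorentzians(x, 1.0, -0.3, 0.4, 0.08, 0.5, 0.3)  # major + trace
        y += 0.002 * np.random.default_rng(2).standard_normal(x.size)

        popt, _ = curve_fit(two_lorentzians, x, y, p0=(1, 0, 0.5, 0.1, 1, 0.5))
        print(popt)   # fitted amplitudes give the relative abundances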

  18. Bonding and Integration Technologies for Silicon Carbide Based Injector Components

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.; Singh, Mrityunjay

    2008-01-01

    Advanced ceramic bonding and integration technologies play a critical role in the fabrication and application of silicon carbide based components for a number of aerospace and ground based applications. One such application is a lean direct injector for a turbine engine to achieve low NOx emissions. Ceramic to ceramic diffusion bonding and ceramic to metal brazing technologies are being developed for this injector application. For the diffusion bonding, titanium interlayers (PVD and foils) were used to aid in the joining of silicon carbide (SiC) substrates. The influence of such variables as surface finish, interlayer thickness (10, 20, and 50 microns), processing time and temperature, and cooling rate was investigated. Microprobe analysis was used to identify the phases in the bonded region. For bonds that were not fully reacted, an intermediate phase, Ti5Si3Cx, formed; this phase is incompatible in its thermal expansion and caused thermal stresses and cracking during the processing cool-down. Thinner titanium interlayers and/or longer processing times resulted in stable and compatible phases that did not contribute to microcracking and gave an optimized microstructure. Tensile tests on the joined materials resulted in strengths of 13-28 MPa depending on the SiC substrate material. Non-destructive evaluation using ultrasonic immersion showed well-formed bonds. For the technology of brazing Kovar fuel tubes to silicon carbide, preliminary development of the joining approach has begun. Various technical issues and requirements for the injector application are addressed.

  19. Critical Components of Effective School-Based Feeding Improvement Programs

    ERIC Educational Resources Information Center

    Bailey, Rita L.; Angell, Maureen E.

    2004-01-01

    This article identifies critical components of effective school-based feeding improvement programs for students with feeding problems. A distinction is made between typical school-based feeding management and feeding improvement programs, where feeding, independent functioning, and mealtime behaviors are the focus of therapeutic strategies.…

  20. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  1. Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools

    SciTech Connect

    Barrand, Guy

    2002-08-20

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualization), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organization, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimizing re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  2. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots

    NASA Astrophysics Data System (ADS)

    Pereira, Jorge Costa; Azevedo, Julio Cesar R.; Knapik, Heloise G.; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  3. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots.

    PubMed

    Pereira, Jorge Costa; Azevedo, Julio Cesar R; Knapik, Heloise G; Burrows, Hugh Douglas

    2016-08-05

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  4. Spatial independent component analysis of functional brain optical imaging

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Pengcheng; Liu, Yadong; Luo, Weihua; Hu, Dewen; Luo, Qingming

    2003-12-01

    This paper introduces the algorithm and basic theory of Independent Component Analysis (ICA) and discusses how to choose the proper ICA model for the data according to the characteristics of the underlying signals to be estimated. Spatial ICA (SICA) is applied to model and analyze the experimental data when the signals and noise are spatially dependent. Data acquired from intrinsic optical signals evoked by electrical stimulation of the sciatic nerve of the rat are analyzed by SICA. In the result, the activity-related component of the signals and its time course can be separated, and the heartbeat and respiration signals can also be separated.

  5. An online incremental orthogonal component analysis method for dimensionality reduction.

    PubMed

    Zhu, Tao; Xu, Ye; Shen, Furao; Zhao, Jinxi

    2017-01-01

    In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract desired orthogonal components (OCs) in an online environment. The OCs and the low-dimensional representations of the original data are obtained with only one pass through the entire dataset. Without solving a matrix eigenproblem or a matrix inversion problem, IOCA learns incrementally from a continuous data stream with low computational cost. By proposing an adaptive threshold policy, IOCA is able to automatically determine the dimension of the feature subspace. Meanwhile, the quality of the learned OCs is guaranteed. The analysis and experiments demonstrate that IOCA is simple, yet efficient and effective.
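
    A loose, single-pass sketch of the flavor of such an algorithm (not the published IOCA): each sample's residual after projection onto the current components is admitted as a new orthonormal component when its norm exceeds a threshold, so the subspace dimension is determined automatically.

        import numpy as np

        def incremental_ocs(stream, threshold):
            components = []                        # growing orthonormal basis
            for x in stream:
                r = np.array(x, dtype=float)
                for c in components:               # project out known directions
                    r -= (r @ c) * c
                n = np.linalg.norm(r)
                if n > threshold:                  # residual carries new info
                    components.append(r / n)
            return np.array(components)

        rng = np.random.default_rng(3)
        data = rng.standard_normal((1000, 10)) @ rng.standard_normal((10, 10))
        basis = incremental_ocs(data, threshold=2.0)
        print(basis.shape)   # (n_components, 10), chosen by the threshold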

  6. Principal Component Analysis for Enhancement of Infrared Spectra Monitoring

    NASA Astrophysics Data System (ADS)

    Haney, Ricky Lance

    The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality; in some cases the cabin air may have contained toxic oil components carried by bleed air, which flows from outside the aircraft through the engines into the aircraft cabin. Significant short- and long-term medical issues for aircraft crew have been attributed to exposure. The need for air quality monitoring is especially evident in the fact that currently within an aircraft there are no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA) in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC) to simplify complex, multi-component infrared (IR) spectra data sets into a reduced data set used for determination of the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectrum region. Application of this analytical numerical technique to IR spectrum analysis is important in improving performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results with the corresponding PCR and PCC to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two and three component systems that could potentially be present as air
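
    A hedged sketch of the PCA-plus-regression (PCR) step described above, with synthetic spectra standing in for the IR measurements; the proportionality-constant calculation is not reproduced.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)
        pure = rng.random((3, 200))                # 3 pure-component "spectra"
        C_train = rng.random((50, 3))              # known concentrations
        X_train = C_train @ pure + 0.01 * rng.standard_normal((50, 200))

        pcr = make_pipeline(PCA(n_components=3), LinearRegression())
        pcr.fit(X_train, C_train)                  # PCR: regress on PC scores

        X_new = np.array([[0.2, 0.5, 0.3]]) @ pure # unknown mixture spectrum
        print(pcr.predict(X_new))                  # estimated concentrations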

  7. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  8. Lung nodules detection in chest radiography: image components analysis

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Mou, Xuanqin; Yang, Ying; Yan, Hao

    2009-02-01

    We aimed to evaluate the effect of different components of the chest image on the performance of both a human observer and the channelized Fisher-Hotelling (CFH) model in a nodule detection task. Irrelevant and relevant components were separated from clinical chest radiographs by employing Principal Component Analysis (PCA) methods. Human observer performance was evaluated in a two-alternative forced-choice (2AFC) task on original clinical images and on images containing only anatomical structure obtained by the PCA methods. A channelized Fisher-Hotelling model with Laguerre-Gauss basis functions was evaluated to predict human performance. We show that the relevant component is the primary factor influencing nodule detection in chest radiography. There is a clear difference in detectability between the human observer and the CFH model for nodule detection in images containing only anatomical structure. The CFH model should therefore be used with care.

  9. Reliability and Creep/Fatigue Analysis of a CMC Component

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.

    2007-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.

  10. Principal Component Analysis of Computed Emission Lines from Protostellar Jets

    NASA Astrophysics Data System (ADS)

    Cerqueira, A. H.; Reyes-Iturbide, J.; De Colle, F.; Vasconcelos, M. J.

    2015-08-01

    A very important issue concerning protostellar jets is the mechanism behind their formation. Obtaining information on the region at the base of a jet can shed light on the subject, and some years ago this was done through a search for a rotational signature in the jet line spectrum. The existence of such signatures, however, remains controversial. In order to contribute to the clarification of this issue, in this paper we show that principal component analysis (PCA) can potentially help to distinguish between rotation and precession effects in protostellar jet images. This method reduces the dimensions of the data, facilitating the efficient extraction of information from large data sets such as those arising from integral field spectroscopy. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates, the eigenvectors, ordered by principal components of decreasing variance. The projection of the data on these coordinates produces images called tomograms, while eigenvectors can be displayed as eigenspectra. The combined analysis of both can allow the identification of patterns correlated to a particular physical property that would otherwise remain hidden, and can help to separate the effects of physically uncorrelated phenomena in the data. These are, for example, rotation and precession in the kinematics of a stellar jet. In order to show the potential of PCA analysis, we apply it to synthetic spectro-imaging datacubes generated as an output of numerical simulations of protostellar jets. In this way we generate a benchmark with which a PCA diagnostics of real observations can be confronted. Using the computed emission line profiles for [O i]λ6300 and [S ii]λ6716, we recover and analyze the effects of rotation and precession in tomograms generated by PCA. We show that different combinations of the eigenvectors can be used to enhance and to identify the rotation features present in the data. Our results indicate that PCA can be

  11. Context sensitivity and ambiguity in component-based systems design

    SciTech Connect

    Bespalko, S.J.; Sindt, A.

    1997-10-01

    Designers of component-based, real-time systems need to guarantee the correctness of software and its output. The complexity of a system, and thus the propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states that allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or inherently ambiguous.

  12. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip while machining at a predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is assembled by aggregating the basic component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  13. Application of independent component analysis to Fermilab Booster

    SciTech Connect

    Huang, X.B.; Lee, S.Y.; Prebys, E.; Tomlin, R.

    2005-01-01

    Autocorrelation is applied to analyze sets of finite-sampling data such as the turn-by-turn beam position monitor (BPM) data in an accelerator. This method of data analysis, called independent component analysis (ICA), is shown to be a powerful beam diagnosis tool, being able to decompose sampled signals into their underlying source signals. They find that ICA has an advantage over the principal component analysis (PCA) used in model-independent analysis (MIA) in isolating independent modes. The tolerance of the ICA method to noise in the BPM system is systematically studied. The ICA is applied to analyze the complicated beam motion in the rapid-cycling booster synchrotron at Fermilab. Difficulties and limitations of the ICA method are also discussed.

  14. Component retention in principal component analysis with application to cDNA microarray data

    PubMed Central

    Cangelosi, Richard; Goriely, Alain

    2007-01-01

    Shannon entropy is used to provide an estimate of the number of interpretable components in a principal component analysis. In addition, several ad hoc stopping rules for dimension determination are reviewed and a modification of the broken stick model is presented. The modification incorporates a test for the presence of an "effective degeneracy" among the subspaces spanned by the eigenvectors of the correlation matrix of the data set, then allocates the total variance among subspaces. A summary of the performance of the methods applied to both published microarray data sets and to simulated data is given. This article was reviewed by Orly Alter, John Spouge (nominated by Eugene Koonin), David Horn and Roy Varshavsky (both nominated by O. Alter). PMID:17229320
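
    For reference, a sketch of the classic (unmodified) broken-stick stopping rule mentioned above: component k is retained while its share of the variance exceeds the broken-stick expectation b_k = (1/p) sum_{i=k}^{p} 1/i. The paper's degeneracy test is not reproduced here.

        import numpy as np

        def broken_stick_retained(eigenvalues):
            lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
            p = lam.size
            frac = lam / lam.sum()                       # variance shares
            bstick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p
                               for k in range(1, p + 1)])
            k = 0
            while k < p and frac[k] > bstick[k]:
                k += 1
            return k

        print(broken_stick_retained([4.0, 2.0, 1.0, 0.5, 0.3, 0.2]))   # -> 2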

  15. [Component analysis on polysaccharides in exocarp of Ginkgo biloba].

    PubMed

    Song, G; Xu, A; Chen, H; Wang, X

    1997-09-01

    This paper reports the content and component analysis of polysaccharides in the exocarp of Ginkgo biloba. The results show that the content of total saccharides is 89.7%, the content of polysaccharides is 84.6%, and the content of reducing saccharides is 5.1%; the polysaccharides are composed of glucose, fructose, galactose and rhamnose.

  16. Automated pattern-guided principal component analysis vs expert-based immunophenotypic classification of B-cell chronic lymphoproliferative disorders: a step forward in the standardization of clinical immunophenotyping

    PubMed Central

    Costa, E S; Pedreira, C E; Barrena, S; Lecrevisse, Q; Flores, J; Quijano, S; Almeida, J; del Carmen García-Macías, M; Bottcher, S; Van Dongen, J J M; Orfao, A

    2010-01-01

    Immunophenotypic characterization of B-cell chronic lymphoproliferative disorders (B-CLPD) is becoming increasingly complex due to usage of progressively larger panels of reagents and a high number of World Health Organization (WHO) entities. Typically, data analysis is performed separately for each stained aliquot of a sample; subsequently, an expert interprets the overall immunophenotypic profile (IP) of neoplastic B-cells and assigns it to specific diagnostic categories. We constructed a principal component analysis (PCA)-based tool to guide immunophenotypic classification of B-CLPD. Three reference groups of immunophenotypic data files—B-cell chronic lymphocytic leukemias (B-CLL; n=10), mantle cell (MCL; n=10) and follicular lymphomas (FL; n=10)—were built. Subsequently, each of the 175 cases studied was evaluated and assigned to either one of the three reference groups or to none of them (other B-CLPD). Most cases (89%) were correctly assigned to their corresponding WHO diagnostic group with overall positive and negative predictive values of 89 and 96%, respectively. The efficiency of the PCA-based approach was particularly high among typical B-CLL, MCL and FL vs other B-CLPD cases. In summary, PCA-guided immunophenotypic classification of B-CLPD is a promising tool for standardized interpretation of tumor IP, their classification into well-defined entities and comprehensive evaluation of antibody panels. PMID:20844562

  17. Major component analysis of dynamic networks of physiologic organ interactions

    NASA Astrophysics Data System (ADS)

    Liu, Kang K. L.; Bartsch, Ronny P.; Ma, Qianli D. Y.; Ivanov, Plamen Ch

    2015-09-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function.

  18. A robust polynomial principal component analysis for seismic noise attenuation

    NASA Astrophysics Data System (ADS)

    Wang, Yuchen; Lu, Wenkai; Wang, Benfeng; Liu, Lei

    2016-12-01

    Random and coherent noise attenuation is a significant aspect of seismic data processing, especially for pre-stack seismic data flattened by normal moveout correction or migration. Signal extraction is widely used for pre-stack seismic noise attenuation. Principal component analysis (PCA), one of the multi-channel filters, is a common tool to extract seismic signals, and it can be realized by singular value decomposition (SVD). However, when applying the traditional PCA filter to seismic signal extraction, the result is unsatisfactory, with some artifacts, when the seismic data are contaminated by random and coherent noise. In order to directly extract the desired signal and fix those artifacts at the same time, we take into consideration the amplitude variation with offset (AVO) property and thus propose a robust polynomial PCA algorithm. In this algorithm, a polynomial constraint is used to optimize the coefficient matrix. In order to simplify this complicated problem, a series of sub-optimal problems are designed and solved iteratively. After that, the random and coherent noise can be effectively attenuated simultaneously. Applications to synthetic and real data sets show that our proposed algorithm better suppresses random and coherent noise and better protects the desired signals, compared with local polynomial fitting, conventional PCA and an L1-norm based PCA method.
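
    For orientation, a sketch of the conventional SVD/PCA baseline that the robust polynomial PCA improves upon: keep the leading singular components of a flattened gather. The paper's polynomial constraint and iterative sub-problems are not reproduced here.

        import numpy as np

        def svd_signal_extraction(gather, rank=1):
            # gather: 2-D array (time samples x traces); returns low-rank signal
            U, s, Vt = np.linalg.svd(gather, full_matrices=False)
            return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

        rng = np.random.default_rng(5)
        signal = np.outer(np.sin(np.linspace(0, 6, 300)), np.ones(40))
        noisy = signal + 0.3 * rng.standard_normal(signal.shape)
        denoised = svd_signal_extraction(noisy, rank=1)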

  19. Principal Components Analysis of Triaxial Vibration Data From Helicopter Transmissions

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Huff, Edward M.

    2001-01-01

    Research on the nature of the vibration data collected from helicopter transmissions during flight experiments has led to several crucial observations believed to be responsible for the high rates of false alarms and missed detections in aircraft vibration monitoring systems. This work focuses on one such finding, namely, the need to consider additional sources of information about system vibrations. In this light, helicopter transmission vibration data, collected using triaxial accelerometers, were explored in three different directions, analyzed for content, and then combined using Principal Components Analysis (PCA) to analyze changes in directionality. In this paper, the PCA transformation is applied to 176 test conditions/data sets collected from an OH58C helicopter to derive the overall experiment-wide covariance matrix and its principal eigenvectors. The experiment-wide eigenvectors are then projected onto the individual test conditions to evaluate changes and similarities in their directionality based on the various experimental factors. The paper will present the foundations of the proposed approach, addressing the question of whether experiment-wide eigenvectors accurately model the vibration modes in individual test conditions. The results will further determine the value of using directionality and triaxial accelerometers for vibration monitoring and anomaly detection.
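
    A minimal sketch of the covariance/eigenvector machinery described above, with random stand-ins for the 176 triaxial records: pool the data, diagonalize the experiment-wide covariance, then project each test condition onto the shared eigenvectors.

        import numpy as np

        rng = np.random.default_rng(6)
        conditions = [rng.standard_normal((5000, 3)) for _ in range(176)]

        pooled = np.vstack(conditions)
        cov = np.cov(pooled, rowvar=False)       # 3x3 experiment-wide covariance
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        principal = eigvecs[:, ::-1]             # experiment-wide axes

        scores = conditions[0] @ principal       # one condition, projected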

  20. Cosmic reionization study: principal component analysis after Planck

    SciTech Connect

    Liu, Yang; Li, Si-Yu; Li, Yong-Ping; Zhang, Xinmin; Li, Hong

    2016-02-01

    The study of reionization history plays an important role in understanding the evolution of our universe. It is commonly believed that the intergalactic medium (IGM) in our universe is fully ionized today; however, the reionization process remains mysterious. A simple instantaneous reionization process is usually adopted in modern cosmology without direct observational evidence. However, the history of the ionization fraction, x_e(z), will influence CMB observables and constraints on the optical depth τ. With mock future data sets based on a featured reionization model, we find that the bias on τ introduced by the instantaneous model cannot be neglected. In this paper, we study the cosmic reionization history in a model-independent way, the so-called principal component analysis (PCA) method, and reconstruct x_e(z) at different redshifts z with the data sets of the Planck and WMAP 9-year temperature and polarization power spectra, combined with the baryon acoustic oscillation (BAO) data from galaxy surveys and the type Ia supernovae (SN) Union 2.1 sample, respectively. The results show that the reconstructed x_e(z) is consistent with instantaneous behavior; however, there exists a slight deviation from this behavior at some epochs. With the PCA method, after abandoning the noisy modes, we get stronger constraints, and the hints for a featured x_e(z) evolution become a little more obvious.

  1. Comparison of analytical eddy current models using principal components analysis

    NASA Astrophysics Data System (ADS)

    Contant, S.; Luloff, M.; Morelli, J.; Krause, T. W.

    2017-02-01

    Monitoring the gap between the pressure tube (PT) and the calandria tube (CT) in CANDU® fuel channels is essential, as contact between the two tubes can lead to delayed hydride cracking of the pressure tube. Multifrequency transmit-receive eddy current non-destructive evaluation is used to determine this gap, as this method has different depths of penetration and variable sensitivity to noise, unlike single-frequency eddy current non-destructive evaluation. An analytical model based on the Dodd and Deeds solutions, and a second model that accounts for normal and lossy self-inductances and a non-coaxial pickup coil, are examined for representing the response of an eddy current transmit-receive probe when considering factors that affect the gap response, such as pressure tube wall thickness and pressure tube resistivity. The multifrequency model data were analyzed using principal components analysis (PCA), a statistical method used to reduce the data set into a data set of fewer variables. The results of the PCA of the analytical models were then compared to PCA performed on a previously obtained experimental data set. The models gave similar results under variable PT wall thickness conditions, but the non-coaxial coil model, which accounts for self-inductive losses, performed significantly better than the Dodd and Deeds model under variable resistivity conditions.

  2. Model-Based Tomographic Reconstruction of Objects Containing Known Components

    PubMed Central

    Stayman, J. Webster; Otake, Yoshito; Prince, Jerry L.; Khanna, A. Jay; Siewerdsen, Jeffrey H.

    2015-01-01

    The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery. PMID:22614574

  3. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars (with no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted by using a difference of Gaussian (DOG) model that simulates the spatiotemporal response in the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides the recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
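
    A hedged sketch of the two named ingredients, difference-of-Gaussian edge extraction and FFT-based phase correlation; the sigmas and toy images are assumptions, and the full component-voting logic is omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_edges(img, s1=1.0, s2=2.0):
            # difference of two Gaussian blurs approximates an edge detector
            return gaussian_filter(img, s1) - gaussian_filter(img, s2)

        def phase_correlation(image, template):
            F1 = np.fft.fft2(image)
            F2 = np.fft.fft2(template, s=image.shape)   # zero-pad template
            cross = F1 * np.conj(F2)
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            return np.unravel_index(np.argmax(corr), corr.shape)

        img = np.zeros((128, 128)); img[40:60, 50:80] = 1.0   # toy component
        edges = dog_edges(img)
        template = edges[40:60, 50:80]
        print(phase_correlation(edges, template))             # ~ (40, 50)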

  4. Guide for Hydrogen Hazards Analysis on Components and Systems

    NASA Technical Reports Server (NTRS)

    Beeson, Harold; Woods, Stephen

    2003-01-01

    The physical and combustion properties of hydrogen give rise to hazards that must be considered when designing and operating a hydrogen system. One of the major concerns in the use of hydrogen is that of fire or detonation because of hydrogen's wide flammability range, low ignition energy, and high flame speed. Other concerns include the contact and interaction of hydrogen with materials, such as the hydrogen embrittlement of materials and the formation of hydrogen hydrides. The low temperature of liquid and slush hydrogen brings other concerns related to material compatibility and pressure control; this is especially important when dissimilar, adjoining materials are involved. The potential hazards arising from these properties and design features necessitate a proper hydrogen hazards analysis before introducing a material, component, or system into hydrogen service. The objective of this guide is to describe the NASA Johnson Space Center White Sands Test Facility hydrogen hazards analysis method that should be performed before hydrogen is used in components and/or systems. The method is consistent with standard practices for analyzing hazards. It is recommended that this analysis be made before implementing a hydrogen component qualification procedure. A hydrogen hazards analysis is a useful tool for hydrogen-system designers, system and safety engineers, and facility managers. A hydrogen hazards analysis can identify problem areas before hydrogen is introduced into a system, preventing damage to hardware, delay or loss of mission or objective, and possible injury or loss of life.

  5. Model-based reconstruction of objects with inexactly known components

    NASA Astrophysics Data System (ADS)

    Stayman, J. W.; Otake, Y.; Schafer, S.; Khanna, A. J.; Prince, J. L.; Siewerdsen, J. H.

    2012-03-01

    Because tomographic reconstructions are ill-conditioned, algorithms that incorporate additional knowledge about the imaging volume generally have improved image quality. This is particularly true when measurements are noisy or have missing data. This paper presents a general framework for inclusion of the attenuation contributions of specific component objects known to be in the field-of-view as part of the reconstruction. Components such as surgical devices and tools may be modeled explicitly as being part of the attenuating volume but are inexactly known with respect to their locations, poses, and possible deformations. The proposed reconstruction framework, referred to as Known-Component Reconstruction (KCR), is based on this novel parameterization of the object, a likelihood-based objective function, and alternating optimizations between registration and image parameters to jointly estimate both the underlying attenuation and the unknown registrations. A deformable KCR (dKCR) approach is introduced that adopts a control point-based warping operator to accommodate shape mismatches between the component model and the physical component, thereby allowing for a more general class of inexactly known components. The KCR and dKCR approaches are applied to low-dose cone-beam CT data with spine fixation hardware present in the imaging volume. Such data are particularly challenging due to photon starvation effects in the projection data behind the metallic components. The proposed algorithms are compared with traditional filtered-backprojection and penalized-likelihood reconstructions and found to provide substantially improved image quality. Whereas traditional approaches exhibit significant artifacts that complicate detection of breaches or fractures near metal, the KCR framework tends to provide good visualization of anatomy right up to the boundary of surgical devices.

  6. Principal components analysis of Mars in the near-infrared

    NASA Astrophysics Data System (ADS)

    Klassen, David R.

    2009-11-01

    Principal components analysis and target transformation are applied to near-infrared image cubes of Mars in a study to disentangle the spectra into a small number of spectral endmembers and characterize the spectral information. The image cubes are ground-based telescopic data from the NASA Infrared Telescope Facility during the 1995 and 1999 near-aphelion oppositions when ice clouds were plentiful [ Clancy, R.T., Grossman, A.W., Wolff, M.J., James, P.B., Rudy, D.J., Billawala, Y.N., Sandor, B.J., Lee, S.W., Muhleman, D.O., 1996. Icarus 122, 36-62; Wolff, M.J., Clancy, R.T., Whitney, B.A., Christensen, P.R., Pearl, J.C., 1999b. In: The Fifth International Conference on Mars, July 19-24, 1999, Pasadena, CA, pp. 6173], and the 2003 near-perihelion opposition when ice clouds are generally limited to topographically high regions (volcano cap clouds) but airborne dust is more common [ Martin, L.J., Zurek, R.W., 1993. J. Geophys. Res. 98 (E2), 3221-3246]. The heart of the technique is to transform the data into a vector space along the dimensions of greatest spectral variance and then choose endmembers based on these new "trait" dimensions. This is done through a target transformation technique, comparing linear combinations of the principal components to a mineral spectral library. In general Mars can be modeled, on the whole, with only three spectral endmembers which account for almost 99% of the data variance. This is similar to results in the thermal infrared with Mars Global Surveyor Thermal Emission Spectrometer data [Bandfield, J.L., Hamilton, V.E., Christensen, P.R., 2000. Science 287, 1626-1630]. The globally recovered surface endmembers can be used as inputs to radiative transfer modeling in order to measure ice abundance in martian clouds [Klassen, D.R., Bell III, J.F., 2002. Bull. Am. Astron. Soc. 34, 865] and a preliminary test of this technique is also presented.
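
    A minimal sketch of the target-transformation test described above: check whether a candidate library spectrum lies in the span of the leading principal components; a small residual marks a plausible endmember. All arrays are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(7)
        cube = rng.random((1000, 64))            # pixels x wavelengths
        cube -= cube.mean(axis=0)
        U, s, Vt = np.linalg.svd(cube, full_matrices=False)
        pcs = Vt[:3]                             # 3 leading spectral PCs

        candidate = 0.5 * pcs[0] - 0.2 * pcs[2]  # stand-in library spectrum
        coef, *_ = np.linalg.lstsq(pcs.T, candidate, rcond=None)
        residual = np.linalg.norm(pcs.T @ coef - candidate)
        print(residual)   # ~0: candidate is expressible in the PCA subspace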

  7. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    Stoyanova, R. S.; Ochs, M. F.; Brown, T. R.; Rooney, W. D.; Li, X.; Lee, J. H.; Springer, C. S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images traditionally have used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.

  8. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  9. [Identification of antler powder components based on DNA barcoding technology].

    PubMed

    Jia, Jing; Shi, Lin-chun; Xu, Zhi-chao; Xin, Tian-yi; Song, Jing-yuan; Chen, Shi-lin

    2015-10-01

    In order to authenticate the components of antler powder on the market, DNA barcoding technology coupled with a cloning method was used. Cytochrome c oxidase subunit I (COI) sequences were obtained according to the DNA barcoding standard operating procedure (SOP). For antler powder with possibly mixed components, the cloning method was used to get each COI sequence. 65 COI sequences were successfully obtained from commercial antler powders via sequencing of PCR products. The results indicate that only 38% of these samples were derived from Cervus nippon Temminck or Cervus elaphus Linnaeus, which are recorded in the 2010 edition of the "Chinese Pharmacopoeia", while 62% of them were derived from other species. Rangifer tarandus Linnaeus was the most frequent species among the adulterants. Further analysis showed that some samples, collected from different regions, companies and price points, contained adulterants. Analysis of 36 COI sequences obtained by the cloning method showed that C. elaphus and C. nippon were the main components. In addition, some samples were marked clearly as antler powder on the label; however, C. elaphus or R. tarandus were their main components. In summary, DNA barcoding can accurately and efficiently distinguish the exact content of commercial antler powder, which provides a new technique to ensure clinical safety and improve the quality control of traditional Chinese medicine.

  10. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos

    2013-01-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ~ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enabled the planned instrument to meet performance requirements.

  11. Principal components analysis of a JWST NIRSpec detector subsystem

    NASA Astrophysics Data System (ADS)

    Rauscher, Bernard J.; Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos

    2013-09-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ~ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use PCA to determine a set of environmental requirements (temperature stability, electrical stability, etc.) that enabled the planned instrument to meet performance requirements.

  12. Reprint of: Enantiomeric separation of functionalized ethano-bridged Tröger bases using macrocyclic cyclofructan and cyclodextrin chiral selectors in high-performance liquid chromatography and capillary electrophoresis with application of principal component analysis.

    PubMed

    Weatherly, Choyce A; Na, Yun-Cheol; Nanayakkara, Yasith S; Woods, Ross M; Sharma, Ankit; Lacour, Jérôme; Armstrong, Daniel W

    2014-10-01

    The enantiomeric separation of a series of racemic functionalized ethano-bridged Tröger base compounds was examined by high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). Using HPLC and CE the entire set of 14 derivatives was separated by chiral stationary phases (CSPs) and chiral additives composed of cyclodextrin (native and derivatized) and cyclofructan (derivatized). Baseline separations (Rs ≥ 1.5) in HPLC were achieved for 13 of the 14 compounds with resolution values as high as 5.0. CE produced 2 baseline separations. The separations on the cyclodextrin CSPs showed optimum results in the reversed phase mode, and the LARIHC cyclofructan CSPs separations showed optimum results in the normal phase mode. HPLC separation data of the compounds was analyzed using principal component analysis (PCA). The PCA biplot analysis showed that retention is governed by the size of the R1 substituent in the case of derivatized cyclofructan and cyclodextrin CSPs, and enantiomeric resolution closely correlated with the size of the R2 group in the case of non-derivatized γ-cyclodextrin CSP. It is clearly shown that chromatographic retention is necessary but not sufficient for the enantiomeric separations of these compounds.

  13. Enantiomeric separation of functionalized ethano-bridged Tröger bases using macrocyclic cyclofructan and cyclodextrin chiral selectors in high-performance liquid chromatography and capillary electrophoresis with application of principal component analysis.

    PubMed

    Weatherly, Choyce A; Na, Yun-Cheol; Nanayakkara, Yasith S; Woods, Ross M; Sharma, Ankit; Lacour, Jérôme; Armstrong, Daniel W

    2014-04-01

    The enantiomeric separation of a series of racemic functionalized ethano-bridged Tröger base compounds was examined by high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). Using HPLC and CE the entire set of 14 derivatives was separated by chiral stationary phases (CSPs) and chiral additives composed of cyclodextrin (native and derivatized) and cyclofructan (derivatized). Baseline separations (Rs≥1.5) in HPLC were achieved for 13 of the 14 compounds with resolution values as high as 5.0. CE produced 2 baseline separations. The separations on the cyclodextrin CSPs showed optimum results in the reversed phase mode, and the LARIHC™ cyclofructan CSPs separations showed optimum results in the normal phase mode. HPLC separation data of the compounds was analyzed using principal component analysis (PCA). The PCA biplot analysis showed that retention is governed by the size of the R1 substituent in the case of derivatized cyclofructan and cyclodextrin CSPs, and enantiomeric resolution closely correlated with the size of the R2 group in the case of non-derivatized γ-cyclodextrin CSP. It is clearly shown that chromatographic retention is necessary but not sufficient for the enantiomeric separations of these compounds.

  14. Principal Component Analysis of Terrestrial and Venusian Topography

    NASA Astrophysics Data System (ADS)

    Stoddard, P. R.; Jurdy, D. M.

    2015-12-01

    We use Principal Component Analysis (PCA) as an objective tool in analyzing, comparing, and contrasting topographic profiles of different or similar features from different locations and planets. To do so, we take average profiles of a set of features and form a cross-correlation matrix, which is then diagonalized to determine its principal components. These components, not merely numbers, represent actual profile shapes that give a quantitative basis for comparing different sets of features. For example, PCA for terrestrial hotspots shows the main component as a generic dome shape. Secondary components show a more sinusoidal shape, related to the lithospheric loading response, and thus give information about the nature of the lithospheric setting of the various hotspots. We examine a range of terrestrial spreading centers: fast, slow, ultra-slow, incipient, and extinct, and compare these to several chasmata on Venus (including Devana, Ganis, Juno, Parga, and Kuanja). For upwelling regions, we consider the oceanic Hawaii, Reunion, and Iceland hotspots and Yellowstone, a prototypical continental hotspot. Venus has approximately one dozen broad topographic and geoid highs called regiones. Our analysis includes Atla, Beta, and W. Eistla regiones; Atla and Beta are widely thought to be the most likely to be currently or recently active. Analysis of terrestrial rifts suggests increasing uniformity of shape among rifts with increasing spreading rate. The uniformity correlations for Venus rank considerably lower than the terrestrial ones. Extrapolating the correlation versus spreading rate suggests that Venus' chasmata, if analogous to terrestrial spreading centers, most resemble the ultra-slow spreading level (less than 12 mm/yr) of the Arctic Gakkel ridge. PCA will provide an objective measurement of this correlation.
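
    A sketch of the profile-PCA recipe the abstract outlines, with toy Gaussian profiles standing in for averaged rift or hotspot cross-sections: form the cross-correlation matrix of the profiles and diagonalize it, so the leading eigenvectors correspond to shared profile shapes.

        import numpy as np

        x = np.linspace(-1, 1, 200)
        profiles = np.array([np.exp(-x**2 / (2 * w**2))
                             for w in (0.2, 0.3, 0.4, 0.5)])

        R = np.corrcoef(profiles)               # profile cross-correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R)
        order = np.argsort(eigvals)[::-1]       # descending variance

        shapes = eigvecs[:, order].T @ profiles # principal profile shapes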

  15. Estimation of Baroreflex Function Using Independent Component Analysis of Photoplethysmography

    NASA Astrophysics Data System (ADS)

    Abe, Makoto; Yoshizawa, Makoto; Sugita, Norihiro; Tanaka, Akira; Homma, Noriyasu; Yambe, Tomoyuki; Nitta, Shin-Ichi

    The maximum cross-correlation coefficient ρmax between blood pressure variability and heart rate variability, with both frequency components limited to the Mayer wave-related band, is a useful index for evaluating the state of the autonomic nervous function related to baroreflex. However, calculating ρmax requires continuous blood pressure measurement with an expensive and bulky device. The present study proposes an easier method for obtaining ρmax from finger photoplethysmography (PPG) alone. In the proposed method, independent components are extracted from feature variables specifying the PPG signal by using independent component analysis (ICA), and the most appropriate component is then chosen so that the ρmax calculated from it approximates its true value. Results from an experiment with a postural change performed on 18 healthy subjects suggest that the proposed method is viable for estimating ρmax, using ICA to extract blood pressure information from the PPG signal.
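
    As a rough illustration of the index itself, the following sketch band-limits the two variability signals to a Mayer-wave band and takes the maximum cross-correlation over a lag window. The band edges, filter order, and lag window are assumptions chosen for illustration; the paper's exact processing is not reproduced here.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def rho_max(bp, hr, fs, band=(0.04, 0.15), max_lag_s=20.0):
          """Maximum cross-correlation between band-limited blood
          pressure and heart rate variability (illustrative)."""
          b, a = butter(2, [f / (fs / 2) for f in band], btype="band")
          x = filtfilt(b, a, bp - np.mean(bp))      # Mayer-wave-band BP variability
          y = filtfilt(b, a, hr - np.mean(hr))      # Mayer-wave-band HR variability
          max_lag = int(max_lag_s * fs)
          def r(lag):                               # correlation at a given lag
              if lag >= 0:
                  u, v = x[lag:], y[:len(y) - lag]
              else:
                  u, v = x[:lag], y[-lag:]
              return np.corrcoef(u, v)[0, 1]
          return max(r(lag) for lag in range(-max_lag, max_lag + 1))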

  16. Common and Cluster-Specific Simultaneous Component Analysis

    PubMed Central

    De Roover, Kim; Timmerman, Marieke E.; Mesquita, Batja; Ceulemans, Eva

    2013-01-01

    In many fields of research, so-called ‘multiblock’ data are collected, i.e., data containing multivariate observations that are nested within higher-level research units (e.g., inhabitants of different countries). Each higher-level unit (e.g., country) then corresponds to a ‘data block’. For such data, it may be interesting to investigate the extent to which the correlation structure of the variables differs between the data blocks. More specifically, when capturing the correlation structure by means of component analysis, one may want to explore which components are common across all data blocks and which components differ across the data blocks. This paper presents a common and cluster-specific simultaneous component method which clusters the data blocks according to their correlation structure and allows for common and cluster-specific components. Model estimation and model selection procedures are described and simulation results validate their performance. Also, the method is applied to data from cross-cultural values research to illustrate its empirical value. PMID:23667463

  17. Principal Component Analysis of Arctic Solar Irradiance Spectra

    NASA Technical Reports Server (NTRS)

    Rabbette, Maura; Pilewskie, Peter; Gore, Warren J. (Technical Monitor)

    2000-01-01

    During the FIRE (First ISCCP Regional Experiment) Arctic Cloud Experiment and the coincident SHEBA (Surface Heat Budget of the Arctic Ocean) campaign, detailed moderate-resolution solar spectral measurements were made to study the radiative energy budget of the coupled Arctic Ocean - Atmosphere system. The NASA Ames Solar Spectral Flux Radiometers (SSFRs) were deployed on the NASA ER-2 and at the SHEBA ice camp. Using the SSFRs, we acquired continuous solar spectral irradiance (380-2200 nm) throughout the atmospheric column. Principal Component Analysis (PCA) was used to characterize the several tens of thousands of retrieved SSFR spectra and to determine the number of independent pieces of information that exist in the visible to near-infrared solar irradiance spectra. In both the upwelling and downwelling cases, almost 100% of the spectral information (irradiance retrieved from 1820 wavelength channels) was contained in the first six extracted principal components. The majority of the variability in the Arctic downwelling solar irradiance spectra was explained by a few fundamental components, including infrared absorption, scattering, water vapor, and ozone. PCA of the SSFR upwelling Arctic irradiance spectra successfully separated surface ice and snow reflection from overlying cloud into distinct components.
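
    The "number of independent pieces of information" can be estimated from the cumulative explained variance, as in this hedged scikit-learn sketch (the 99.9% threshold is an illustrative stand-in for the paper's criterion):

      import numpy as np
      from sklearn.decomposition import PCA

      def n_independent_components(spectra, threshold=0.999):
          """Count the principal components needed to capture nearly
          all variance in an (n_spectra, n_channels) matrix."""
          pca = PCA().fit(spectra)
          cumulative = np.cumsum(pca.explained_variance_ratio_)
          return int(np.searchsorted(cumulative, threshold)) + 1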

  18. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components for the probabilistic analysis of structures, including an expert system, a probabilistic finite element code, a probabilistic boundary element code, and a fast probability integrator. An expert system is included to capture and utilize PSAM knowledge and experience: NESSUS/EXPERT is an interactive, menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.

  19. Incorporating finite element analysis into component life and reliability

    NASA Technical Reports Server (NTRS)

    August, Richard; Zaretsky, Erwin V.

    1991-01-01

    A method for calculating a component's design survivability by incorporating finite element analysis and probabilistic material properties was developed. The method evaluates design parameters through direct comparisons of component survivability expressed in terms of Weibull parameters. The analysis was applied to a rotating disk with mounting bolt holes. The highest probability of failure occurred at, or near, the maximum shear stress region of the bolt holes. Distribution of failure as a function of Weibull slope affects the probability of survival. Where Weibull parameters are unknown for a rotating disk, it may be permissible to assume Weibull parameters, as well as the stress-life exponent, in order to determine the disk speed where the probability of survival is highest.
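
    A weakest-link Weibull calculation of the kind described can be sketched as follows; the two-parameter form and the element-volume weighting are standard, but the parameter names and normalization are assumptions rather than the paper's exact formulation.

      import numpy as np

      def component_survivability(stresses, volumes, sigma0, m, v0=1.0):
          """Weakest-link probability of survival from element-level
          finite element stresses and two-parameter Weibull strength."""
          s = np.maximum(np.asarray(stresses, dtype=float), 0.0)  # tensile stresses only
          v = np.asarray(volumes, dtype=float)
          risk = np.sum((v / v0) * (s / sigma0) ** m)             # summed risk of rupture
          return np.exp(-risk)                                    # survival probability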

  20. Real-time feature extraction of P300 component using adaptive nonlinear principal component analysis

    PubMed Central

    2011-01-01

    Background: Electroencephalography (EEG) signals reflect the firing of neurons in the brain. The P300 wave is a potential evoked by an event-related stimulus, and the detection of P300s in measured EEG signals is widely investigated. Detection is difficult because P300s are mixed with other signals generated over a large brain area and their amplitudes are very small, owing to distance and to differences in the resistivity of the transmission paths. Methods: A novel real-time feature extraction method for detecting P300 waves is proposed, combining an adaptive nonlinear principal component analysis (ANPCA) and a multilayer neural network. The measured EEG signals are first filtered using a sixth-order band-pass filter with cut-off frequencies of 1 Hz and 12 Hz. The proposed ANPCA scheme consists of four steps: pre-separation, whitening, separation, and estimation. In the experiment, four different inter-stimulus intervals (ISIs) were utilized: 325 ms, 350 ms, 375 ms, and 400 ms. Results: The multi-stage principal component analysis applied at the pre-separation step reduced external noise and artifacts significantly. The adaptive law introduced in the whitening step made the subsequent separation-step algorithm converge quickly. The separation performance index varied from -20 dB to -33 dB owing to the randomness of the source signals. The robustness of the ANPCA against background noise was evaluated by comparing its separation performance index with four algorithms (NPCA, NSS-JD, JADE, and SOBI); the ANPCA algorithm demonstrated the shortest iteration time, with a performance index of about 0.03. On this basis, it is asserted that the ANPCA algorithm successfully separates mixed source signals. Conclusions: The independent components produced from the observed data using the proposed method illustrated that the extracted signals were clearly the P300 components elicited by task
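
    The first preprocessing step above (a sixth-order 1-12 Hz band-pass) might look like the following scipy sketch; the Butterworth family is an assumption, as the abstract does not name the filter type.

      from scipy.signal import butter, sosfiltfilt

      def p300_prefilter(eeg, fs):
          """Sixth-order 1-12 Hz band-pass applied before the ANPCA
          stages (zero-phase, for offline illustration)."""
          # butter(3, ...) with btype="bandpass" yields a 6th-order band-pass
          sos = butter(3, [1.0, 12.0], btype="bandpass", fs=fs, output="sos")
          return sosfiltfilt(sos, eeg, axis=-1)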

  1. Three-dimensional inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Todd, E. S.

    1987-01-01

    The objective of this program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional inelastic structural analysis of combustor liners, turbine blades, and turbine vanes. Each code embodies a progression of mathematical models for increasingly comprehensive representation of the geometrical features, loading conditions, and forms of nonlinear material response that distinguish these three groups of hot section components.

  2. On 3-D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Todd, E. S.

    1986-01-01

    The objective of this program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional inelastic structural analysis of combustor liners, turbine blades, and turbine vanes. Each code embodies a progression of mathematical models for increasingly comprehensive representation of the geometrical features, loading conditions, and forms of nonlinear material response that distinguish these three groups of hot section components.

  3. Deuterium incorporation in biomass cell wall components by NMR analysis

    SciTech Connect

    Foston, Marcus B; McGaughey, Joseph; O'Neill, Hugh Michael; Evans, Barbara R; Ragauskas, Arthur J

    2012-01-01

    A commercially available deuterated kale sample was analyzed for deuterium incorporation by ionic liquid solution 2H and 1H nuclear magnetic resonance (NMR). This protocol effectively measured the percent deuterium incorporation at 33%, comparable to the 31% value determined by combustion. A qualitative analysis of the solution NMR spectra also suggested that deuterium is preferentially incorporated into the carbohydrate components of the kale sample.

  4. Independent Component Analyses of Ground-based Exoplanetary Transits

    NASA Astrophysics Data System (ADS)

    Silva Martins-Filho, Walter; Griffith, Caitlin Ann; Pearson, Kyle; Waldmann, Ingo; Biddle, Lauren; Zellem, Robert Thomas; Alvarez-Candal, Alvaro

    2016-10-01

    Most observations of exoplanetary atmospheres are conducted when a "Hot Jupiter" exoplanet transits in front of its host star. These Jovian-sized planets have small orbital periods, on the order of days, and therefore short transit times, making them more amenable to observation. Measurements of Hot Jupiter transits must achieve a 10^-4 level of accuracy in the flux to determine the spectral modulations of the exoplanetary atmosphere. To accomplish this level of precision, we need to remove systematic errors, and, for ground-based measurements, the effects of Earth's atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. Currently, the effects of the terrestrial atmosphere and some of the time-dependent systematic errors are treated by dividing the host star by a reference star at each wavelength and time step of the transit. More recently, Independent Component Analyses (ICA) have been used to remove systematic effects from the raw data of space-based observations (Waldmann 2012, 2014; Morello et al. 2015, 2016). ICA is a statistical method born from the ideas of blind-source separation studies, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). One strength of this method is that it requires no additional prior knowledge of the system. Here, we present a study of the application of ICA to ground-based transit observations of extrasolar planets, which are affected by Earth's atmosphere. We analyze photometric data of two extrasolar planets, WASP-1b and GJ3470b, recorded by the 61" Kuiper Telescope at Steward Observatory using the Harris B and U filters. The presentation will compare the light curve depths and their dispersions as derived from the ICA analysis to those derived by analyses that ratio the host star to nearby reference stars. References: Waldmann, I. P. 2012, ApJ, 747, 12; Waldmann, I. P. 2014, ApJ, 780, 23; Morello, G. 2015, ApJ, 806
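
    In outline, the de-trending idea can be sketched with scikit-learn's FastICA: the simultaneously observed light curves (target plus reference stars) are decomposed into independent source signals, one of which should carry the transit. This is a schematic of the approach, not the authors' pipeline; identifying which component is the transit (e.g., by correlation with the expected transit window) is left to the user.

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_detrend(light_curves, n_components=None, seed=0):
          """Blind-source separation of a stack of light curves.
          Rows of `light_curves` are time series (target first)."""
          X = np.asarray(light_curves, dtype=float).T   # samples x signals
          ica = FastICA(n_components=n_components, random_state=seed)
          sources = ica.fit_transform(X)                # samples x components
          return sources.T, ica.mixing_                 # components and mixing matrix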

  5. Guidelines for Design and Analysis of Large, Brittle Spacecraft Components

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1993-01-01

    There were two related parts to this work. The first, conducted at The Aerospace Corporation was to develop and define methods for integrating the statistical theory of brittle strength with conventional finite element stress analysis, and to carry out a limited laboratory test program to illustrate the methods. The second part, separately funded at Aerojet Electronic Systems Division, was to create the finite element postprocessing program for integrating the statistical strength analysis with the structural analysis. The second part was monitored by Capt. Jeff McCann of USAF/SMC, as Special Study No.11, which authorized Aerojet to support Aerospace on this work requested by NASA. This second part is documented in Appendix A. The activity at Aerojet was guided by the Aerospace methods developed in the first part of this work. This joint work of Aerospace and Aerojet stemmed from prior related work for the Defense Support Program (DSP) Program Office, to qualify the DSP sensor main mirror and corrector lens for flight as part of a shuttle payload. These large brittle components of the DSP sensor are provided by Aerojet. This document defines rational methods for addressing the structural integrity and safety of large, brittle, payload components, which have low and variable tensile strength and can suddenly break or shatter. The methods are applicable to the evaluation and validation of such components, which, because of size and configuration restrictions, cannot be validated by direct proof test.

  6. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of the numerical algorithm of a network flow analysis code to perform multi-dimensional flow calculations. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered; turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear-driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  7. Polarized BRDF for coatings based on three-component assumption

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is given, based on a three-component reflection assumption, in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both the multiple reflection and the volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, validate that the pBRDF modeling accuracy is significantly improved by the three-component model given in this paper.
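
    The polarization logic of the three-component assumption can be illustrated with a small sketch: the specular lobe carries Fresnel polarization, while the multiple-reflection and volume-scattering terms contribute only unpolarized intensity. The weights and the smooth-surface Fresnel treatment below are illustrative simplifications of the microfacet model, not the paper's fitted pBRDF.

      import numpy as np

      def fresnel_reflectances(theta_i, n):
          """Fresnel intensity reflectances (s and p) for a smooth
          dielectric of refractive index n (air incidence)."""
          ci = np.cos(theta_i)
          st = np.sin(theta_i) / n                   # Snell's law
          ct = np.sqrt(1.0 - st ** 2)
          rs = (ci - n * ct) / (ci + n * ct)
          rp = (n * ci - ct) / (n * ci + ct)
          return rs ** 2, rp ** 2

      def degree_of_polarization(theta_i, n, k_spec, k_diff):
          """DoP from a polarized specular term plus depolarized
          multiple-reflection/volume terms (illustrative weights)."""
          Rs, Rp = fresnel_reflectances(theta_i, n)
          polarized = 0.5 * k_spec * abs(Rs - Rp)
          total = 0.5 * k_spec * (Rs + Rp) + k_diff
          return polarized / total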

  8. Spectral Synthesis via Mean Field approach to Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ning; Su, Shan-Shan; Kong, Xu

    2016-03-01

    We apply a new statistical analysis technique, the Mean Field approach to Independent Component Analysis (MF-ICA) in a Bayesian framework, to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), and a galaxy spectrum can then be reconstructed from these ICs. Compared to other algorithms, which decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different tests are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) a consistency test of parameters derived from galaxies in the Sloan Digital Sky Survey. We find that our MF-ICA method can not only fit the observed galaxy spectra efficiently, but also accurately recover the physical parameters of galaxies. We also apply our spectral analysis method to the DEEP2 spectroscopic data and find that it provides excellent fits for low signal-to-noise spectra.

  9. A fast minimum variance beamforming method using principal component analysis.

    PubMed

    Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo

    2014-06-01

    Minimum variance (MV) beamforming has been studied for improving the performance of diagnostic ultrasound imaging systems. However, MV beamforming is difficult to implement in a real-time ultrasound imaging system because of the enormous computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates MV beamforming while greatly reducing the computational complexity through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the PCA-transformed domain of the beamformer input signal, where the dimension of the transformed covariance matrix equals the number of selected principal component vectors. Both computer simulations and experiments were carried out to verify the effectiveness of the proposed method, with echo signals from simulation as well as phantom and in vivo experiments. We confirm that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
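
    The reduced-dimension step can be sketched as follows: with an offline PCA basis V estimated from pre-calculated MV weights, the covariance matrix is projected to V^H R V (e.g., 2 × 2) and the small MV problem is solved there. Variable names, and the omission of diagonal loading and subarray averaging, are simplifications of the published method.

      import numpy as np

      def pca_mv_weights(snapshots, basis, steering):
          """Minimum-variance weights computed in a PCA-reduced
          subspace (illustrative sketch of the proposed method).

          snapshots : (n_elements, n_samples) beamformer input
          basis     : (n_elements, k) dominant principal components
          steering  : (n_elements,) steering vector"""
          V = np.asarray(basis)
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]
          R_r = V.conj().T @ R @ V                 # reduced k x k covariance
          a_r = V.conj().T @ steering              # reduced steering vector
          w_r = np.linalg.solve(R_r, a_r)          # small MV solve
          w_r = w_r / (a_r.conj() @ w_r)           # distortionless normalization
          return V @ w_r                           # weights in element space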

  10. Representation for dialect recognition using topographic independent component analysis

    NASA Astrophysics Data System (ADS)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the tone features of a dialect are subject to changes in pitch frequency as well as in tone length. It is beneficial for recognition if a representation can be derived that accounts for these frequency and length changes in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing dialect features that vary in pitch frequency and time length. Topographic independent component analysis (TICA) is applied as the unsupervised learning step, producing an emergent result: a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  11. Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis.

    PubMed

    Cohnstaedt, Lee W; Rochon, Kateryn; Duehl, Adrian J; Anderson, John F; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C; Obenauer, Peter J; Campbell, James F; Lysyk, Tim J; Allan, Sandra A

    2012-03-01

    Effective entomological surveillance planning stresses careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium "Advancements in arthropod monitoring technology, techniques, and analysis" presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones, and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae), respectively. After sampling, extrapolating real-world population numbers from trap capture data is possible with the appropriate analysis techniques. Examples of this extrapolation and of action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles.

  12. Early Effect of Amyloid β-Peptide on Hippocampal and Serum Metabolism in Rats Studied by an Integrated Method of NMR-Based Metabolomics and ANOVA-Simultaneous Component Analysis.

    PubMed

    Du, Yao; Zheng, Hong; Xia, Huanhuan; Zhao, Liangcai; Hu, Wenyi; Bai, Guanghui; Yan, Zhihan; Gao, Hongchang

    2017-01-01

    Amyloid β (Aβ) deposition has been implicated in the pathogenesis of Alzheimer's disease. However, the early effect of Aβ deposition on metabolism remains unclear. In the present study, we therefore explored the metabolic changes in the hippocampus and serum during the first 2 weeks after Aβ25-35 injection in rats, using an integrated method of NMR-based metabolomics and ANOVA-simultaneous component analysis (ASCA). Our results show that Aβ25-35 injection, time, and their interaction had statistically significant effects on the hippocampal and serum metabolome. Furthermore, we identified the key metabolites that mainly contributed to these effects. From 1 to 2 weeks after Aβ25-35 injection, the levels of lactate, N-acetylaspartate, creatine, and taurine were decreased in rat hippocampus, while an increase in lactate and decreases in LDL/VLDL and glucose were observed in rat serum. Therefore, we suggest that a reduction in energy and lipid metabolism, as well as an increase in anaerobic glycolysis, may occur at the early stage of Aβ25-35 deposition.
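
    ASCA, in outline, partitions the mean-centered data into ANOVA effect matrices (injection, time, interaction) and applies simultaneous component analysis, here plain PCA, to each effect matrix. The sketch below treats a single factor and is a generic illustration under those assumptions, not the study's code.

      import numpy as np
      from sklearn.decomposition import PCA

      def asca_effect_pca(X, levels, n_components=2):
          """PCA of the ANOVA effect matrix for one factor.

          X      : (n_samples, n_metabolites) data matrix
          levels : length-n_samples array of factor labels"""
          X = np.asarray(X, dtype=float)
          levels = np.asarray(levels)
          Xc = X - X.mean(axis=0)                  # remove grand mean
          effect = np.zeros_like(Xc)
          for lev in np.unique(levels):
              mask = levels == lev
              effect[mask] = Xc[mask].mean(axis=0) # rows = level means
          pca = PCA(n_components=n_components).fit(effect)
          return pca.components_, pca.explained_variance_ratio_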

  13. Early Effect of Amyloid β-Peptide on Hippocampal and Serum Metabolism in Rats Studied by an Integrated Method of NMR-Based Metabolomics and ANOVA-Simultaneous Component Analysis

    PubMed Central

    Du, Yao; Xia, Huanhuan; Zhao, Liangcai; Hu, Wenyi; Bai, Guanghui

    2017-01-01

    Amyloid β (Aβ) deposition has been implicated in the pathogenesis of Alzheimer's disease. However, the early effect of Aβ deposition on metabolism remains unclear. In the present study, we therefore explored the metabolic changes in the hippocampus and serum during the first 2 weeks after Aβ25–35 injection in rats, using an integrated method of NMR-based metabolomics and ANOVA-simultaneous component analysis (ASCA). Our results show that Aβ25–35 injection, time, and their interaction had statistically significant effects on the hippocampal and serum metabolome. Furthermore, we identified the key metabolites that mainly contributed to these effects. From 1 to 2 weeks after Aβ25–35 injection, the levels of lactate, N-acetylaspartate, creatine, and taurine were decreased in rat hippocampus, while an increase in lactate and decreases in LDL/VLDL and glucose were observed in rat serum. Therefore, we suggest that a reduction in energy and lipid metabolism, as well as an increase in anaerobic glycolysis, may occur at the early stage of Aβ25–35 deposition. PMID:28243597

  14. Prediction of p38 map kinase inhibitory activity of 3, 4-dihydropyrido [3, 2-d] pyrimidone derivatives using an expert system based on principal component analysis and least square support vector machine

    PubMed Central

    Shahlaei, M.; Saghaie, L.

    2014-01-01

    A quantitative structure–activity relationship (QSAR) study is presented for predicting the biological activity (pIC50) of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors. The biological activities of the compounds of interest were modeled as a function of molecular structure by means of principal component analysis (PCA) and least squares support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied to predict the biological activities of pyrimidone derivatives that were not included in the modeling procedure. The resulting model showed high predictive ability, with a root mean square error of prediction of 0.460 for LS-SVM. The study provides a novel and effective approach for predicting the biological activities of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors and shows that LS-SVM can be used as a powerful chemometrics tool for QSAR studies. PMID:26339262

  15. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Owing to the special imaging principle of interference hyperspectral image data, every frame contains many vertical interference stripes. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so direct application of compressive sensing theory and traditional compression algorithms cannot achieve the ideal effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, morphological component analysis (MCA) is adopted in this paper to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral imagery leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of the interference hyperspectral image data. The iterative convergence condition is improved: iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. In addition, following the idea that an orthogonal basis sparsely represents its corresponding signal but not other signals, an adaptive threshold update is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that
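
    A generic two-dictionary MCA loop with the adaptive threshold and stopping rule described above might look like the following; the transform pairs, the mean-of-extremes threshold, and the tolerance are loose interpretations of the abstract, not the paper's algorithm.

      import numpy as np

      def mca_separate(image, transforms, n_iter=30, tol=1e-4):
          """Separate an image into morphological components.

          transforms : list of (analysis, synthesis) callables, one
                       orthogonal basis per component (e.g. one that
                       sparsifies the vertical stripes, one for the
                       background)."""
          parts = [np.zeros_like(image) for _ in transforms]
          prev_err = np.inf
          for _ in range(n_iter):
              for k, (analysis, synthesis) in enumerate(transforms):
                  resid = image - sum(parts) + parts[k]   # residual seen by basis k
                  coeffs = analysis(resid)
                  mags = np.abs(coeffs)
                  thr = 0.5 * (mags.min() + mags.max())   # adaptive threshold
                  coeffs = np.where(mags < thr, 0.0, coeffs)
                  parts[k] = synthesis(coeffs)
              err = np.linalg.norm(image - sum(parts))
              if abs(prev_err - err) < tol * np.linalg.norm(image):
                  break                                   # error nearly unchanged
              prev_err = err
          return parts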

  16. Polycyclic Aromatic Aerosol Components: Chemical Analysis and Reactivity

    NASA Astrophysics Data System (ADS)

    Schauer, C.; Niessner, R.; Pöschl, U.

    Polycyclic aromatic hydrocarbons (PAHs) are ubiquitous environmental pollutants in the atmosphere and originate primarily from incomplete combustion of organic matter and fossil fuels. Their main sources are anthropogenic (e.g. vehicle emissions, domestic heating, or tobacco smoke), and PAHs consisting of more than four fused aromatic rings reside mostly on combustion aerosol particles, where they can react with atmospheric trace gases like O3, NOx, or OH radicals, leading to a wide variety of partially oxidized and nitrated derivatives. Such chemical transformations can strongly affect the activity of the aerosol particles as condensation nuclei, their atmospheric residence times, and consequently their direct and indirect climatic effects. Moreover, some polycyclic aromatic compounds (PACs = PAHs + derivatives) are known to have high carcinogenic, mutagenic, and allergenic potential, and are thus of major importance in air pollution control. Furthermore, PACs can be used as well-defined soot model substances, since the basic structure of soot can be regarded as an agglomerate of highly polymerized PAC layers. For the chemical analysis of polycyclic aromatic aerosol components, a new analytical method based on LC-APCI-MS has been developed, and a database comprising PAHs, Oxy-PAHs, and Nitro-PAHs has been established. Together with a GC-HRMS method, it will be applied to identify and quantify PAHs and Nitro-PAHs in atmospheric aerosol samples, diesel exhaust particle samples, and model soot samples from laboratory reaction kinetics and product studies. As reported before, the adsorption and surface reaction rate of ozone on soot and PAH-like particle surfaces is reduced by competitive adsorption of water vapor at low relative humidity (< 25%). Recent results at higher relative humidities (ca. 50%), however, indicate re-enhanced gas-phase ozone loss, which may be due to absorption of ozone into an aqueous surface layer. The interaction of ozone and nitrogen

  17. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Several techniques have recently been introduced to analyze skin color by separating skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on the quantitative analysis of skin color unevenness that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method is composed of three main techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore into multi-resolution images, which can be used to identify clusters of different sizes or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequencies; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the observed age-related change in real time.

  18. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  19. Cognitive components underpinning the development of model-based learning.

    PubMed

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2016-10-29

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning.

  20. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    SciTech Connect

    Whalen, Michael R. (Stevens Tech./SLAC)

    2011-06-22

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source, LCLS, timing differences between the optical pulse and x-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements from signals corresponding to the two laser pulses; these measurements are done by using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting phase detection systems based on mixers, such as signal input power, noise levels, temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also shows that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to exist on a scale between 1 and 27 fs/°C for all of the components tested. Results show that any components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved to lower than currently possible.

  1. Gas chromatography/mass spectrometry based component profiling and quality prediction for Japanese sake.

    PubMed

    Mimura, Natsuki; Isogai, Atsuko; Iwashita, Kazuhiro; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-10-01

    Sake is a traditional Japanese alcoholic beverage produced by the simultaneous saccharification and alcohol fermentation of polished, steamed rice by Aspergillus oryzae and Saccharomyces cerevisiae. About 300 compounds have been identified in sake, and the contributions of individual components to sake flavor have been examined. However, only a few compounds can explain the characteristics on their own, and most of the attributes remain unclear. The purpose of this study was to examine the relationship between the component profile and the attributes of sake. Gas chromatography coupled with mass spectrometry (GC/MS)-based non-targeted analysis was employed to obtain the low-molecular-weight component profile of Japanese sake, including both nonvolatile and volatile compounds. Sake attributes and overall quality were assessed by an analytical descriptive sensory test, and a model predicting the sensory scores from the component profile was constructed by means of orthogonal projections to latent structures (OPLS) regression analysis. Our results showed that 12 sake attributes [ginjo-ka (aroma of premium ginjo sake), grassy/aldehydic odor, sweet aroma/caramel/burnt odor, sulfury odor, sour taste, umami, bitter taste, body, amakara (dryness), aftertaste, pungent/smoothness, and appearance] and overall quality were accurately explained by the component profiles. In addition, we were able to select statistically significant components according to variable importance on projection (VIP). Our methodology clarified the correlation between sake attributes and 200 low-molecular-weight components and ranked the importance of each component, thus providing new insights for the flavor study of sake.
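
    Schematically, the attribute prediction step maps the GC/MS component matrix to sensory scores with a latent-structure regression. The sketch below substitutes scikit-learn's PLSRegression for the OPLS used in the study (OPLS is not available in scikit-learn); the number of latent variables is illustrative.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def fit_attribute_model(components, sensory_scores, n_latent=3):
          """Fit a latent-structure regression from component
          profiles (samples x compounds) to a sensory attribute."""
          X = np.asarray(components, dtype=float)
          y = np.asarray(sensory_scores, dtype=float)
          model = PLSRegression(n_components=n_latent).fit(X, y)
          return model          # model.predict(X_new) scores new samples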

  2. Biochemical component identification by plasmonic improved whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-05-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins, microelements, and antibiotics of different generations, in both single- and multi-component solutions over a wide concentration range, analyzed via the light-scattering parameters of a whispering gallery mode optical resonance based sensor. Multiplexing over parameters and components was realized using a custom fluidic sensor cell with dielectric microspheres fixed in an adhesive layer, together with dedicated data processing. Biochemical component identification was performed using network analysis techniques developed for this purpose, and the approach is shown to be applicable to both single-agent and multi-component biochemical analysis. A novel technique based on optical resonance in microring structures, plasmon resonance, and identification tools has been developed. To improve the sensitivity of the microring structures, the adhesive-fixed microspheres were pretreated with a gold nanoparticle solution; another variant used thin gold films deposited on the substrate below the adhesive layer. Both biomolecule and nanoparticle injections caused considerable changes in the optical resonance spectra, and plasmonic gold layers of optimized thickness also improved the parameters of the optical resonance spectra. The advantages of plasmon-enhanced optical microcavity resonance combined with multiparameter identification tools are thus used to develop a new platform for an ultra-sensitive, label-free biomedical sensor.

  3. Principal components granulometric analysis of tidally dominated depositional environments

    SciTech Connect

    Mitchell, S.W.; Long, W.T.; Friedrich, N.E.

    1991-02-01

    Sediments often are investigated by using mechanical sieve analysis (at 1/4 or 1/2 φ intervals) to identify differences in weight-percent distributions between related samples and, thereby, to deduce variations in sediment sources and depositional processes. Similar granulometric data from groups of surface samples from two siliciclastic estuaries and one carbonate tidal creek have been clustered using principal components analysis. Subtle geographic trends in tidally dominated depositional processes and in sediment sources can be inferred from the clusters. In Barnstable Harbor, Cape Cod, Massachusetts, the estuary can be subdivided into five major subenvironments, with tidal current intensities/directions and sediment sources (longshore transport or sediments weathering from the Sandwich Moraine) as controls. In Morro Bay, San Luis Obispo County, California, all major environments (beach, dune, bay, delta, and fluvial) can be easily distinguished, and a wide variety of subenvironments can be recognized. In Pigeon Creek, San Salvador Island, Bahamas, twelve subenvironments can be recognized; biogenic (Halimeda, Peneroplis, mixed skeletal), chemogenic (peloids, aggregates), and detrital (lithoclasts of eroding Pleistocene limestone) grain types dominate. When combined with tidal current intensities/directions, grain sources produce subenvironments distributed parallel to tidal channels. The investigation of these three modern environments indicates that principal components granulometric analysis is potentially a useful tool for recognizing subtle changes in transport processes and sediment sources preserved in ancient depositional sequences.

  4. Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis

    PubMed Central

    Rochon, Kateryn; Duehl, Adrian J.; Anderson, John F.; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C.; Obenauer, Peter J.; Campbell, James F.; Lysyk, Tim J.; Allan, Sandra A.

    2015-01-01

    Effective entomological surveillance planning stresses careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium "Advancements in arthropod monitoring technology, techniques, and analysis" presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones, and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae), respectively. After sampling, extrapolating real-world population numbers from trap capture data is possible with the appropriate analysis techniques. Examples of this extrapolation and of action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles. PMID:26543242

  5. A component based implementation of agents and brokers for design coordination

    NASA Technical Reports Server (NTRS)

    Weidner, R. J.

    2001-01-01

    NASA's mission design coordination has been based on expert opinion of parametric data presented in Excel or PowerPoint. Common access is required to more powerful design tools that support performance simulation and analysis. Components provide a means of inexpensively adding the desired functionality.

  6. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an ongoing study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning designed specifically for low-achieving children. This group of children needs specific learning support that can serve as alternative learning material in their learning environment. To develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a set of specific indication classifications to prioritize applicability. The extraction process revealed 17 common components for consideration; based on scientific justification, 16 of them were then selected as the proposed components for the model.

  7. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  8. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging that allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
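
    The component spatial pattern analysis amounts to per-pixel spectral unmixing against a library of measured absorption spectra. Below is a hedged sketch using non-negative least squares; the linear mixing model and all function names are assumptions, not the authors' implementation.

      import numpy as np
      from scipy.optimize import nnls

      def component_pattern_maps(absorbance_stack, library):
          """Per-pixel chemical abundance maps from a multispectral
          THz image stack.

          absorbance_stack : (n_freqs, ny, nx) transillumination data
          library          : (n_freqs, n_chemicals) reference spectra"""
          n_freqs, ny, nx = absorbance_stack.shape
          maps = np.zeros((library.shape[1], ny, nx))
          for i in range(ny):
              for j in range(nx):
                  maps[:, i, j], _ = nnls(library, absorbance_stack[:, i, j])
          return maps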

  9. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.

  10. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report an Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA, and ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest because of its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.

  11. Independent component analysis of Alzheimer's DNA microarray gene expression data

    PubMed Central

    Kong, Wei; Mou, Xiaoyang; Liu, Qingzhong; Chen, Zhongxue; Vanderburg, Charles R; Rogers, Jack T; Huang, Xudong

    2009-01-01

    Background: Gene microarray technology is an effective tool for investigating the simultaneous activity of multiple cellular pathways across hundreds to thousands of genes. However, because the colossal amounts of data generated by DNA microarray technology are usually complex, noisy, high-dimensional, and often hindered by low statistical power, their exploitation is difficult. To overcome these problems, two kinds of unsupervised analysis methods for microarray data have been developed: principal component analysis (PCA) and independent component analysis (ICA). PCA projects the data into a new space spanned by principal components that are mutually orthonormal. However, the constraint of mutual orthogonality and the second-order statistics underlying PCA algorithms may not suit the biological systems studied; extracting and characterizing the most informative features of biological signals requires higher-order statistics. Results: ICA is an unsupervised algorithm that can extract higher-order statistical structure from data and has been applied to DNA microarray gene expression data analysis. We applied the FastICA method to DNA microarray gene expression data from Alzheimer's disease (AD) hippocampal tissue samples and performed the consequent gene clustering. Experimental results showed that the ICA method can improve the clustering of AD samples and identify significant genes. More than 50 significant genes with high expression levels in severe AD were extracted, representing immunity-related proteins, metal-related proteins, membrane proteins, lipoproteins, neuropeptides, cytoskeleton proteins, cellular binding proteins, and ribosomal proteins. Within the aforementioned categories, our method also found 37 significant genes with low expression levels. Moreover, it is worth noting that some oncogenes and phosphorylation-related proteins are expressed at low levels. In comparison to the PCA and support

  12. Analysis of Component of Aggression in the Stories of Elementary School Aggressive Children

    ERIC Educational Resources Information Center

    Chamandar, Fateme; Jabbari, D. Susan

    2017-01-01

    The purpose of this study is a content analysis of children's stories based on the components of aggression. Participants were 66 elementary school students (16 girls and 50 boys) selected from the fourth and fifth grades using the Relational and Overt Aggression Questionnaire completed by their teachers. The Draw a Story Test (Silver, 2005) is…

  13. Hurricane properties by principal component analysis of Doppler radar data

    NASA Astrophysics Data System (ADS)

    Harasti, Paul Robert

    A novel approach based on Principal Component Analysis (PCA) of Doppler radar data establishes hurricane properties such as the positions of the circulation centre and wind maxima. The method was developed in conjunction with a new Doppler radar wind model for both mature and weak hurricanes. The tangential wind (Vt) is modeled according to Vt·ζ^x = constant, where ζ is the radius and x is an exponent; the maximum Vt occurs at the Radius of Maximum Wind (RMW). For the mature (weak) hurricane case, x = 1 (x < 1) within the RMW, and x = 0.5 (x = 0) beyond the RMW. The radial wind is modeled in a similar fashion in the radial direction, with up to two transition radii, but it is also varied linearly in the vertical direction; this is the first Doppler radar wind model to account for vertical variations in the radial wind. The new method employs an S2-mode PCA on the Doppler velocity data taken from a single PPI scan and arranged sequentially in a matrix according to their azimuth and range coordinates. The first two eigenvectors of both the range and azimuth eigenspaces represent over 95% of the total variance in the modeled data; one eigenvector from each pair is analyzed separately to estimate particular hurricane properties. These include the bearing and range to the hurricane's circulation centre, the RMW, and the transition radii of the radial wind. Model results suggest that greater accuracy is achievable, and fewer restrictions apply, in comparison to other methods. The PCA method was tested on the Doppler velocity data of Hurricane Erin (1995) and Typhoon Alex (1987). In both cases, the similarity of the eigenvectors to their theoretical counterparts was striking, even in the presence of significant missing data. Results from Hurricane Erin were in agreement with concurrent aircraft observations of the wind centre corrected for storm motion. Such information was not available for Typhoon Alex; however, the results agreed with those from other methods.
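
    The piecewise wind model can be sketched as below. The exponents follow the abstract (x = 1 inside and x = 0.5 outside the RMW for the mature case); the sign convention is chosen here so that Vt increases toward the RMW inside the eye and decays beyond it, which is the usual modified-Rankine behaviour and is an interpretation of the abstract's compact notation.

      import numpy as np

      def tangential_wind(zeta, v_max, rmw, x_in=1.0, x_out=0.5):
          """Modeled tangential wind Vt(zeta), peaking at the RMW.
          Defaults are the mature-case exponents; the weak case
          would use x_in < 1 and x_out = 0."""
          zeta = np.asarray(zeta, dtype=float)
          vt = np.empty_like(zeta)
          inside = zeta <= rmw
          vt[inside] = v_max * (zeta[inside] / rmw) ** x_in     # rises to the maximum
          vt[~inside] = v_max * (rmw / zeta[~inside]) ** x_out  # decays outside
          return vt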

  14. Analysis of Fission Products on the AGR-1 Capsule Components

    SciTech Connect

    Paul A. Demkowicz; Jason M. Harp; Philip L. Winston; Scott A. Ploger

    2013-03-01

    The components of the AGR-1 irradiation capsules were analyzed to determine the retained inventory of fission products and thereby the extent of in-pile fission product release from the fuel compacts. This included analysis of (i) the metal capsule components, (ii) the graphite fuel holders, (iii) the graphite spacers, and (iv) the gas exit lines. The fission products most prevalent in the components were Ag-110m, Cs-134, Cs-137, Eu-154, and Sr-90, and the most common locations were the metal capsule components and the graphite fuel holders. Gamma scanning of the graphite fuel holders was also performed to determine the spatial distribution of Ag-110m and radiocesium. Silver was released from the fuel components in significant fractions: the total Ag-110m inventory found in the capsules ranged from 1.2×10^-2 (Capsule 3) to 3.8×10^-1 (Capsule 6). Ag-110m was not distributed evenly in the graphite fuel holders, but tended to concentrate at the axial ends of the graphite holders in Capsules 1 and 6 (located at the top and bottom of the test train) and near the axial center in Capsules 2, 3, and 5 (in the center of the test train). The Ag-110m further tended to be concentrated around fuel stacks 1 and 3, the two stacks facing the ATR reactor core, the location of higher burnup, neutron fluence, and temperatures compared with Stack 2. Detailed correlation of silver release with fuel type and irradiation temperature is problematic at the capsule level due to the large range of temperatures experienced by individual fuel compacts in each capsule. A comprehensive Ag-110m mass balance for the capsules was performed using the measured inventories of individual compacts and the inventory on the capsule components; for most capsules, the mass balance was within 11% of the predicted inventory. The Ag-110m release from individual compacts often exhibited a very large range within a particular capsule.

  15. Nonlinear independent component analysis: Existence and uniqueness results.

    PubMed

    Hyvärinen, Aapo; Pajunen, Petteri

    1999-04-01

    The question of the existence and uniqueness of solutions for nonlinear independent component analysis is addressed. It is shown that if the space of mixing functions is not limited, there always exists an infinity of solutions. In particular, it is shown how to construct parameterized families of solutions. Unlike in the linear case, the indeterminacies involved are not trivial. Next, it is shown how to utilize some results of complex analysis to obtain uniqueness of solutions. We show that for two dimensions the solution is unique up to a rotation, if the mixing function is constrained to be a conformal mapping and some further assumptions hold. We also conjecture that the solution is strictly unique except in some degenerate cases, as the indeterminacy implied by the rotation is essentially similar to that of estimating the linear ICA model.

  16. The ethical component of professional competence in nursing: an analysis.

    PubMed

    Paganini, Maria Cristina; Yoshikawa Egry, Emiko

    2011-07-01

    The purpose of this article is to initiate a philosophical discussion about the ethical component of professional competence in nursing from the perspective of Brazilian nurses. Specifically, this article discusses professional competence in nursing practice in the Brazilian health context, based on two different conceptual frameworks. The first framework is derived from the idealistic and traditional approach while the second views professional competence through the lens of historical and dialectical materialism theory. The philosophical analyses show that the idealistic view of professional competence differs greatly from practice. Combining nursing professional competence with philosophical perspectives becomes a challenge when ideals are opposed by the reality and implications of everyday nursing practice.

  17. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; Moseley, S. H.; Porst, J.-P.; Porter, F. S.; Sadleir, J. E.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an ^{55}Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
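
    A minimal sketch of the PCA step described in this record, assuming baseline-subtracted pulse records in the rows of a matrix; the number of retained components and the energy-combination step are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

# Hedged sketch of the PCA step for TES pulse records: rows of X are
# baseline-subtracted pulse traces (n_pulses x n_samples).
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 1024))          # placeholder pulse records

Xc = X - X.mean(axis=0)                    # remove the mean pulse
# Eigenvectors of the sample covariance, obtained via SVD for stability.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 4                                      # e.g. height, arrival, shape, drift
scores = Xc @ Vt[:k].T                     # per-pulse projections

# In the spirit of the paper, an energy estimator would be some
# calibrated combination of the leading scores; the first score alone
# is a crude stand-in here.
energy_proxy = scores[:, 0]
```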

  18. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; Moseley, S. H.; Porst, J.-P.; Porter, F. S.; Sadleir, J. E.; Smith, S. J.

    2016-07-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an ^{55}Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.

  19. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    PubMed

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method for single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) and error control (EC), were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia (mitdb) data and with 60 normal and 30 diagnostic ECG data sets from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
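
    A hedged sketch of the core PCA compression idea, assuming beats have already been detected and resampled to a common length; the BRC/EC selection loops, quantization, and delta/Huffman stages of the paper are omitted.

```python
import numpy as np

# Minimal sketch of PCA-based beat compression; B holds equal-length
# beats (n_beats x n_samples). All data and parameters are placeholders.
rng = np.random.default_rng(2)
B = rng.normal(size=(600, 400))            # placeholder beat matrix

mean_beat = B.mean(axis=0)
Bc = B - mean_beat
U, s, Vt = np.linalg.svd(Bc, full_matrices=False)

k = 8                                      # components kept (BRC/EC would tune this)
coeffs = Bc @ Vt[:k].T                     # transmit: coeffs, Vt[:k], mean_beat

B_rec = coeffs @ Vt[:k] + mean_beat        # reconstruction at the decoder
prdn = np.linalg.norm(B - B_rec) / np.linalg.norm(B - B.mean()) * 100
print(f"PRDN ~ {prdn:.1f} %")
```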

  20. Analysis of Performance of Jet Engine from Characteristics of Components II : Interaction of Components as Determined from Engine Operation

    NASA Technical Reports Server (NTRS)

    Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl

    1949-01-01

    In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.

  1. Scalable Robust Principal Component Analysis using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael

    2015-12-23

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
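
    The following is a hedged sketch of a Grassmann Average fixed-point iteration for the leading component of zero-mean data; the initialization, convergence test, and the trimming that turns GA into TGA are assumptions here, not the authors' exact procedure.

```python
import numpy as np

# Hedged sketch of the Grassmann Average iteration: each observation is
# treated as a one-dimensional subspace, and signs are chosen to align
# the subspaces with the current estimate before averaging. The trimmed
# (TGA) variant would replace the mean with a coordinate-wise trimmed mean.
def grassmann_average(X, n_iter=100, tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.normal(size=X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(X @ q)             # align each observation with q
        signs[signs == 0] = 1.0
        q_new = (signs[:, None] * X).mean(axis=0)
        q_new /= np.linalg.norm(q_new)
        if np.linalg.norm(q_new - q) < tol:
            return q_new
        q = q_new
    return q

X = np.random.default_rng(3).normal(size=(2000, 50))
leading = grassmann_average(X - X.mean(axis=0))
```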

  2. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.

  3. Revisiting AVHRR tropospheric aerosol trends using principal component analysis

    NASA Astrophysics Data System (ADS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-03-01

    The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean, offering valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last for years after an eruption, which can mask the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols in a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and re-estimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase in aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, the North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.
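
    A compact sketch of the reconstruction step, under the study's finding that the volcanic signal loads onto a single leading mode; the anomaly matrix and the choice of which mode to remove are placeholders.

```python
import numpy as np

# Sketch: A holds an AOD anomaly record (n_months x n_gridcells); the
# volcanic mode is assumed to be the first principal component.
rng = np.random.default_rng(4)
A = rng.normal(size=(300, 1000))           # placeholder anomaly record

U, s, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)

# Zero the volcanic mode and rebuild the record before re-estimating trends.
s_clean = s.copy()
s_clean[0] = 0.0
A_clean = (U * s_clean) @ Vt + A.mean(axis=0)
```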

  4. Demixed principal component analysis of neural population data

    PubMed Central

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-01-01

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378

  5. Revisiting AVHRR Tropospheric Aerosol Trends Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-01-01

    The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean, offering valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last for years after an eruption, which can mask the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols in a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and re-estimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase in aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, the North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.

  6. Anisoplanatic Imaging Through Turbulence Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Baena-Gallé, R.; Katsaggelos, A.; Molina, R.; Mateos, J.; Gladysz, S.

    The performance of optical systems is highly degraded by atmospheric turbulence when observing either vertically (e.g., astronomy, remote sensing) or horizontally (long-range surveillance). This problem can be partially alleviated using adaptive optics (AO), but only for small fields of view (FOV) described by the isoplanatic angle, within which the turbulence-induced aberrations can be considered constant. Additionally, the problem can be tackled using post-processing techniques such as deconvolution algorithms which take into account the variability of the point spread function (PSF) in anisoplanatic conditions. The variability of the PSF across the FOV in anisoplanatic imagery can be described using principal component analysis (the Karhunen-Loeve transform). A number of variable PSFs can then be used to create new basis functions, called principal components (PCs), which can be considered constant across the FOV and, therefore, can potentially be used to perform global deconvolution. Our aim is twofold: first, to describe the shape and statistics of the anisoplanatic PSF for single-conjugate AO systems with only a few parameters and, second, to use this information to obtain the set of PSFs at positions in the FOV such that the associated variability is properly described. These PSFs are then decomposed into PCs. Finally, the entire FOV is deconvolved globally using deconvolution algorithms which account for uncertainties involved in local estimates of the PSFs. Our approach is tested on simulated, single-conjugate AO data.
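
    A minimal sketch of building a Karhunen-Loeve basis from a stack of field-dependent PSFs; the PSF stack, its size, and the number of retained components are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: P holds n_field PSFs flattened to vectors; the leading
# components give field-constant basis functions for global deconvolution.
rng = np.random.default_rng(5)
P = rng.random(size=(64, 32 * 32))         # placeholder PSF stack

mean_psf = P.mean(axis=0)
U, s, Vt = np.linalg.svd(P - mean_psf, full_matrices=False)

n_pc = 5
basis = Vt[:n_pc].reshape(n_pc, 32, 32)    # principal components as images
weights = (P - mean_psf) @ Vt[:n_pc].T     # per-field-position coefficients
```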

  7. Sensitivity analysis on an AC600 aluminum skin component

    NASA Astrophysics Data System (ADS)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced into the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, the application of aluminum alloys is increasing for skin panels. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on developing new anisotropic models for aluminum cup drawing. However, the impact of correct modelling on the latest aluminum alloys for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  8. Derivation of Boundary Manikins: A Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.
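
    An illustrative sketch, not the study's actual procedure: boundary cases are placed at +/-2 sigma corners of the first two principal components and mapped back to measurement space; the data, dimensions, and corner rule are assumptions.

```python
import numpy as np

# Hedged sketch of deriving boundary cases in PC space; A holds
# placeholder anthropometric measurements (subjects x measurements).
rng = np.random.default_rng(6)
A = rng.normal(size=(500, 12))

mean = A.mean(axis=0)
cov = np.cov(A, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]            # sort components by variance
evals, evecs = evals[order], evecs[:, order]

# Corners at +/-2 sigma along the first two components, mapped back to
# measurement space as candidate boundary-manikin anthropometry.
corners = np.array([[2, 2], [2, -2], [-2, 2], [-2, -2]], float)
manikins = mean + (corners * np.sqrt(evals[:2])) @ evecs[:, :2].T
```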

  9. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    NASA Astrophysics Data System (ADS)

    Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie

    2013-08-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra up to a distance of seven meters in order to qualitatively and quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectral variance in order to identify a rock from its spectrum. In this paper we test independent component analysis (ICA) rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distance, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives better overall performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary, and their analytical performance will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches.
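
    A hedged sketch of distance-based classification in ICA space using FastICA from scikit-learn and the Mahalanobis distance; the spectra, labels, and component count are placeholders, and the ChemCam processing chain is not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Project labeled spectra into ICA space, then score an unknown spectrum
# by its Mahalanobis distance to each class cloud. Data are synthetic.
rng = np.random.default_rng(7)
spectra = rng.random(size=(200, 512))      # training spectra
labels = rng.integers(0, 4, size=200)      # four known rock classes

ica = FastICA(n_components=10, random_state=0)
scores = ica.fit_transform(spectra)

def mahalanobis_to_class(x, cls):
    pts = scores[labels == cls]
    mu, cov = pts.mean(axis=0), np.cov(pts, rowvar=False)
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

unknown = ica.transform(rng.random(size=(1, 512)))[0]
best = min(range(4), key=lambda c: mahalanobis_to_class(unknown, c))
print("closest class:", best)
```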

  10. Component analysis of dental porcelain for assisting dental identification.

    PubMed

    Aboshi, H; Takahashi, T; Komuro, T

    2006-12-01

    The fluorescence of porcelain crowns recovered from the mouth of an unknown murder victim, and of several control porcelain samples, was examined under fluorescent examination lamps. The fluorescence from two of the control samples was quite similar to that from the porcelain crowns recovered from the victim. To increase the objectivity of the results by quantitative analysis, the composition of each porcelain crown and control sample was also evaluated with a wavelength-dispersive X-ray microanalyser. The elements detected in the porcelain crowns of the victim matched those of two of the porcelain samples. Later, the antemortem dental records and radiographs of the victim were obtained through a dentist, who had recognized the name of the porcelain manufacturer in a postmortem dental information request placed on the Japanese Dental Association web page. Although component analysis of dental porcelain may be an effective means of assisting dental identification, a more rapid and non-destructive analysis for detecting the elements is required. An energy-dispersive X-ray fluorescence (EDXRF) spectrometer was therefore used in a pilot study of porcelain composition identification.

  11. Prognostic Health Monitoring System: Component Selection Based on Risk Criteria and Economic Benefit Assessment

    SciTech Connect

    Binh T. Pham; Vivek Agarwal; Nancy J Lybeck; Magdy S Tawfik

    2012-05-01

    Prognostic health monitoring (PHM) is a proactive approach to monitoring the ability of structures, systems, and components (SSCs) to withstand structural, thermal, and chemical loadings over the SSCs' planned service lifespans. The current efforts to extend the operational license lifetime of the aging fleet of U.S. nuclear power plants from 40 to 60 years and beyond can benefit from a systematic application of PHM technology. Implementing a PHM system would strengthen the safety of nuclear power plants, reduce plant outage time, and reduce operation and maintenance costs. However, a nuclear power plant has thousands of SSCs, so implementing a PHM system that covers all SSCs requires careful planning and prioritization. This paper therefore focuses on component selection based on the analysis of a component's failure probability, risk, and cost. Ultimately, the decision on component selection depends on the overall economic benefits arising from the safety and operational considerations associated with implementing the PHM system.
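
    An illustrative scoring sketch, not the paper's model: candidate components are ranked by expected risk plus monitoring benefit; every entry and weight below is an assumption.

```python
# Hypothetical component list: (name, failure probability per year,
# failure consequence in $, economic benefit of monitoring in $).
components = [
    ("feedwater pump", 0.02, 5e6, 8e5),
    ("service water valve", 0.05, 1e6, 2e5),
    ("main transformer", 0.01, 2e7, 3e6),
]

def score(p_fail, consequence, benefit):
    # Expected annual risk plus the economic benefit of monitoring;
    # a real analysis would weight and discount these terms.
    return p_fail * consequence + benefit

ranked = sorted(components, key=lambda c: score(*c[1:]), reverse=True)
for name, *_ in ranked:
    print(name)
```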

  12. A Local Learning Rule for Independent Component Analysis

    PubMed Central

    Isomura, Takuya; Toyoizumi, Taro

    2016-01-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661
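
    The authors' rule is not reproduced here; for orientation only, the sketch below implements a classic natural-gradient (infomax-style) ICA update on a synthetic two-source mixture.

```python
import numpy as np

# Classic online ICA update, W <- W + eta * (I - tanh(y) y^T) W, shown
# only to illustrate what an ICA learning rule looks like; this is NOT
# the local rule proposed in the paper. Sources and gains are synthetic.
rng = np.random.default_rng(11)
n = 20000
S = np.vstack([rng.laplace(size=n),        # two super-Gaussian sources
               np.sign(rng.normal(size=n)) * rng.exponential(size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix
X = A @ S

W = np.eye(2)
eta = 1e-3
for t in range(n):
    x = X[:, t:t + 1]
    y = W @ x
    W += eta * (np.eye(2) - np.tanh(y) @ y.T) @ W

Y = W @ X   # recovered sources, up to scaling and permutation
```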

  13. A Local Learning Rule for Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Isomura, Takuya; Toyoizumi, Taro

    2016-06-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering.

  14. A component-labeling algorithm based on contour tracing

    NASA Astrophysics Data System (ADS)

    Qiu, Liudong; Li, Zushu

    2007-12-01

    A new method for finding connected components in binary images is presented in this paper. The main step of this method is to use a contour tracing technique to detect component contours, and to use the contour information to fill in interior areas. All the component points are traced by this algorithm in a single pass and are assigned either a new label or the same label as the contour pixels. Comparative experimental results show that our algorithm is fast, and that it not only labels components but also extracts component contours at the same time, which proves to be more useful than algorithms that only label components.

  15. Efficacy-oriented compatibility for component-based Chinese medicine

    PubMed Central

    Zhang, Jun-hua; Zhu, Yan; Fan, Xiao-hui; Zhang, Bo-li

    2015-01-01

    Single-target drugs have not achieved satisfactory therapeutic effects for complex diseases involving multiple factors. Instead, innovations in recent drug research and development have revealed the emergence of compound drugs, such as cocktail therapies and "polypills", as the frontier in new drug development. A traditional Chinese medicine (TCM) prescription, which is usually composed of several medicinal herbs, can serve as a typical representative of compound medicines. Although the traditional compatibility theory of TCM cannot yet be well expressed in modern scientific language, the fundamental purpose of TCM compatibility can be understood as promoting efficacy and reducing toxicity. This paper introduces the theory and methods of efficacy-oriented compatibility for developing component-based Chinese medicines. PMID:25864650

  16. Rapid Laser Prototyping Of Polymer-Based Nanoplasmonic Components

    NASA Astrophysics Data System (ADS)

    Stepanov, A. L.; Kiyan, R.; Reinhardt, C.; Seidel, A.; Passinger, S.; Chichkov, B. N.

    Renewed and growing interest in the field of surface plasmon polaritons (SPPs) comes from the rapid advance of nanostructuring technologies. The application of the two-photon polymerization technique to the fabrication of dielectric and metallic SPP structures, which can be used for localization, guiding, and manipulation of SPP waves on a subwavelength scale, is studied. This technology is based on the nonlinear absorption of near-infrared femtosecond laser pulses. Excitation, propagation, and interaction of SPP waves with nanostructures are controlled and studied by leakage radiation imaging. It is demonstrated that the created nanostructures on a metal film are very efficient for the excitation and focusing of SPPs. Examples of passive and active SPP components are presented and discussed.

  17. Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi

    A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structure analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, the vibration displacement is calculated through a band pass filter and Simpson's rule, and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, which is more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and finally calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena such as corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to the Nakajima Bridge in Yahata, Kitakyushu, Japan.
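
    A hedged sketch of the displacement step: band-pass filtering followed by double numerical integration. Cumulative trapezoidal integration stands in for the paper's Simpson's rule, and the signal, band edges, and drift handling are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

# Band-pass the acceleration around the identified vibration frequency,
# then integrate twice to obtain displacement. Parameters are illustrative.
fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
acc = np.sin(2 * np.pi * 2.5 * t)           # placeholder bridge acceleration

b, a = butter(4, [1.0, 5.0], btype="bandpass", fs=fs)
acc_f = filtfilt(b, a, acc)

vel = cumulative_trapezoid(acc_f, t, initial=0.0)
vel = filtfilt(b, a, vel)                   # re-filter to suppress drift
disp = cumulative_trapezoid(vel, t, initial=0.0)
```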

  18. A new three-dimensional topology optimization method based on moving morphable components (MMCs)

    NASA Astrophysics Data System (ADS)

    Zhang, Weisheng; Li, Dong; Yuan, Jie; Song, Junfu; Guo, Xu

    2017-04-01

    In the present paper, a new method for solving the three-dimensional topology optimization problem is proposed. This method is constructed within the so-called moving morphable components based solution framework. The novel aspect of the proposed method is that a set of structural components is introduced to describe the topology of a three-dimensional structure, and the optimal structural topology is found by optimizing the layout of the components explicitly. The standard finite element method with ersatz material is adopted for structural response analysis, and the shape sensitivity analysis only needs to be carried out along the structural boundary. Compared to existing methods, the description of the structural topology is totally independent of the finite element/finite difference resolution in the proposed solution framework, and therefore the number of design variables can be reduced substantially. Some widely investigated benchmark examples in three-dimensional topology optimization design are presented to demonstrate the effectiveness of the proposed approach.

  19. Extremum-seeking based antiskid control and functional principal components

    NASA Astrophysics Data System (ADS)

    Tunay, Ilker

    The first part of this work extends principal component analysis (PCA) to random elements of abstract Hilbert spaces. Using only standard functional analysis, it is shown that the optimal PCA subspace is spanned by the eigenvectors of the covariance operator and that the linear variety that minimizes average squared error contains the mean of the distribution. An immediate application is converting a nonlinear parametrization of a dynamical system model to an optimal linear one. This makes adaptive control of such systems possible using established linear estimation techniques. The second part describes the modeling of an electrohydraulic pressure servo valve and brake hydraulic system, and the design of an inner-loop controller which can be used with independent antiskid or auto-brake controllers. The effects of connecting lines on stability and performance are explicitly taken into account in control design by using analytical solutions to a two-dimensional viscous compressible model of fluid motion in the pipes. The modal approximation technique is used in the simulations. In order to facilitate control design, singular perturbation analysis is employed to reduce the order of the model in a systematic fashion. By combining partial feedback linearization and linear H-infinity control, stability robustness against oil parameter variations and component wear is guaranteed. The closed-loop response is almost linear, fast, sufficiently damped, consistent over the whole operating range, and the asymmetry between filling and dumping is significantly reduced. The third part gives an overview of extremum-seeking control and presents an antiskid controller for transport aircraft. The controller does not assume knowledge of tire-runway friction characteristics. Information about the local slope of the friction coefficient function is obtained from phase difference measurements of an injected sine wave. The Popov criterion is used to show robust stability. A realistic model of the
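
    A conceptual sketch of extremum seeking with an injected sine wave, in the spirit of the third part: the output is synchronously demodulated and the low-passed product drives the set-point toward the peak. The friction curve, gains, and frequencies are assumptions, not the dissertation's controller.

```python
import numpy as np

# Toy friction-vs-slip curve with a single peak; purely illustrative.
def mu(s):
    return 0.8 * np.sin(np.pi * np.clip(s, 0, 1) / 0.3) * np.exp(-2 * s)

dt, a, w, k = 1e-3, 0.01, 2 * np.pi * 5.0, 0.5
s_hat, grad = 0.05, 0.0
for n in range(60000):
    t = n * dt
    s = s_hat + a * np.sin(w * t)          # injected sine perturbation
    y = mu(s)
    demod = y * np.sin(w * t)              # synchronous demodulation
    grad += dt * 10.0 * (demod - grad)     # first-order low-pass filter
    s_hat += dt * k * grad                 # climb the estimated slope
print("slip set-point near the friction peak:", round(s_hat, 3))
```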

  20. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
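
    A minimal sample-entropy (SampEn) sketch of the kind of estimator the classifier relies on; m and r follow common defaults, and the template-counting convention is one of several standard variants.

```python
import numpy as np

# SampEn(m, r): -ln(A/B), where B counts template matches of length m
# and A of length m+1, under the Chebyshev distance, excluding
# self-matches. Vectorized and suitable only for short series.
def sampen(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = np.sum(d <= r) - len(templ)    # drop self-comparisons
        return n / 2                       # unordered pairs
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(8)
print("white noise:", round(sampen(rng.normal(size=500)), 2))   # high entropy
print("sine wave :", round(sampen(np.sin(np.linspace(0, 20 * np.pi, 500))), 2))
```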

  1. Analysis of Femoral Components of Cemented Total Hip Arthroplasty

    NASA Astrophysics Data System (ADS)

    Singh, Shantanu; Harsha, A. P.

    2016-10-01

    There have been continuous, ongoing revisions to prosthesis design in Total Hip Arthroplasty (THA) to improve the endurance of hip replacements. In the present work, Finite Element Analysis was performed on cemented THA with CoCrMo trapezoidal, CoCrMo circular, Ti6Al4V trapezoidal and Ti6Al4V circular stems. It was observed that the cross section and material of the femoral stem are critical parameters for the stress distribution in femoral components, the distribution of interfacial stress, and micro movements. The first part of the analysis investigated the designs for micro movements and developed stress for different stem materials. The later part of the analysis focused on different stem cross sections. The femoral stem made of titanium alloy (Ti6Al4V) resulted in larger debonding of the stem at the cement-stem interface and increased stress within the cement mantle in contrast to the chromium alloy (CoCrMo) stem. Thus, CoCrMo proved to be a better choice for cemented THA. Comparison between CoCrMo femoral stems of trapezoidal and circular cross section showed that the trapezoidal stem experiences less sliding and debonding at the interfaces than the circular cross section stem. The trapezoidal cross section also generated lower peak stress in the femoral stem and cortical femur. In the present study, a femoral head with a diameter of 36 mm was considered for the analysis in order to avoid dislocation of the stem. The metallic femoral head was coupled with a cross-linked polyethylene liner, which experiences negligible wear compared with a conventional polyethylene liner and, unlike a metallic liner, is non-carcinogenic.

  2. 78 FR 6344 - Certain Wireless Communications Base Stations and Components Thereof Notice of Receipt of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... COMMISSION Certain Wireless Communications Base Stations and Components Thereof Notice of Receipt of... received a complaint entitled Certain Wireless Communications Base Stations and Components Thereof, DN 2934... the sale within the United States after importation of certain wireless communications base...

  3. Principal component analysis of indocyanine green fluorescence dynamics for diagnosis of vascular diseases

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee

    2015-03-01

    Indocyanine green (ICG), a near-infrared fluorophore, has been used in the visualization of vascular structure and the non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for the sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine the ICG spatiotemporal profile and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet showed significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found distinct patterns in diabetic patients. We propose that PCA of ICG dynamics reveals better classification performance than fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.

  4. Analysis of adaptive laser scanning optical system with focus-tunable components

    NASA Astrophysics Data System (ADS)

    Pokorný, P.; Mikš, A.; Novák, J.; Novák, P.

    2015-05-01

    This work presents a primary analysis of an adaptive laser scanner based on a two-mirror beam-steering device and focus-tunable components (lenses with tunable focal length). An optical scheme of an adaptive laser scanner is proposed that can focus the laser beam continuously to a required spatial position using a lens with tunable focal length. The work focuses on a detailed analysis of the active optical and opto-mechanical components (e.g., focus-tunable lenses) mounted in the optical systems of laser scanners. Algebraic formulas are derived for ray tracing through different configurations of the scanning optical system, from which one can calculate the scanner mirror angles and the required focal length of the tunable-focus component, provided that the position of the focused beam in 3D space is given with a required tolerance. Computer simulations of the proposed system are performed using MATLAB.
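
    A paraxial sketch of the focal-length requirement for the tunable lens, assuming a single thin lens with the beam waist a fixed distance in front of it; the signs, distances, and helper name are illustrative and do not reproduce the paper's full ray-tracing formulas.

```python
# Thin-lens relation 1/f = 1/s_obj + 1/s_img (real object, real image):
# given the waist-to-lens distance and the desired focus distance behind
# the lens, compute the focal length to command on the tunable lens.
def required_focal_length(s_obj, s_img):
    return 1.0 / (1.0 / s_obj + 1.0 / s_img)

for target in (0.5, 1.0, 2.0, 5.0):        # target ranges in metres
    f = required_focal_length(0.1, target)  # waist assumed 0.1 m before lens
    print(f"target {target} m -> f = {f:.4f} m")
```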

  5. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  6. A Principal Component Analysis of the Diffuse Interstellar Bands

    NASA Astrophysics Data System (ADS)

    Ensor, T.; Cami, J.; Bhatt, N. H.; Soddu, A.

    2017-02-01

    We present a principal component (PC) analysis of 23 line-of-sight parameters (including the strengths of 16 diffuse interstellar bands, DIBs) for a well-chosen sample of single-cloud sightlines representing a broad range of environmental conditions. Our analysis indicates that the majority (∼93%) of the variations in the measurements can be captured by only four parameters. The main driver (i.e., the first PC) is the amount of DIB-producing material in the line of sight, a quantity that is extremely well traced by the equivalent width of the λ5797 DIB. The second PC is the amount of UV radiation, which correlates well with the λ5797/λ5780 DIB strength ratio. The remaining two PCs are more difficult to interpret, but are likely related to the properties of dust in the line of sight (e.g., the gas-to-dust ratio). With our PCA results, the DIBs can then be used to estimate these line-of-sight parameters.

  7. Residual Strength Analysis Methodology: Laboratory Coupons to Structural Components

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Newman, J. C., Jr.; Starnes, J. H., Jr.; Rose, C. A.; Young, R. D.; Seshadri, B. R.

    2000-01-01

    The NASA Aircraft Structural Integrity Program (NASIP) and Airframe Airworthiness Assurance/Aging Aircraft (AAA/AA) Program have developed a residual strength prediction methodology for aircraft fuselage structures. This methodology has been experimentally verified for structures ranging from laboratory coupons up to full-scale structural components. The methodology uses the critical crack tip opening angle (CTOA) fracture criterion to characterize the fracture behavior, and a materially and geometrically nonlinear finite element shell analysis code to perform the structural analyses. The present paper presents the results of a study to evaluate the fracture behavior of 2024-T3 aluminum alloys with thicknesses from 0.04 inches to 0.09 inches. The critical CTOA, and the corresponding plane strain core height necessary to simulate through-the-thickness effects at the crack tip in an otherwise plane stress analysis, were determined from small laboratory specimens. Using these parameters, the CTOA fracture criterion was used to predict the behavior of middle crack tension specimens that were up to 40 inches wide, flat panels with riveted stiffeners and multiple-site damage cracks, 18-inch diameter pressurized cylinders, and full-scale curved stiffened panels subjected to internal pressure and mechanical loads.

  8. The use of principle component and cluster analyses to differentiate banana pulp flours based on starch and dietary fiber components.

    PubMed

    Ramli, Saifullah Bin; Alkarkhi, Abbas F M; Yong, Yeoh Shin; Easa, Azhar Mat

    2009-01-01

    Flours prepared from green and ripe Cavendish and Dream banana fruits were assessed for total starch, digestible starch, resistant starch, total dietary fiber, soluble dietary fiber and insoluble dietary fiber. Principal component analysis identified a single component responsible for explaining 83.83% of the total variance in the starch and dietary fiber data, indicating that ripe banana flour had different characteristics from the green. Cluster analysis applied to the same data obtained two statistically significant clusters, of green and of ripe banana, indicating differences in behavior according to the stage of ripeness. In conclusion, starch and dietary fiber components could be used to discriminate between flours prepared from fruits at different stages of ripeness. The results also suggest the potential of green as well as ripe banana flour as functional ingredients in food.

  9. Spatiotemporal filtering for regional GPS network in China using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ming, Feng; Yang, Yuanxi; Zeng, Anmin; Zhao, Bin

    2017-04-01

    Removal of the common mode error (CME) is a routine procedure in postprocessing regional GPS network observations, which is commonly performed using principal component analysis (PCA). PCA decomposes a network time series into a group of modes, where each mode comprises a common temporal function and a corresponding spatial response based on second-order statistics (variance and covariance). However, the probability distribution function of a GPS time series is non-Gaussian; therefore, the largest variances do not correspond to the meaningful axes, and the PCA-derived components may not have an obvious physical meaning. In this study, the CME was assumed statistically independent of other errors, and it was extracted using independent component analysis (ICA), which involves higher-order statistics. First, the ICA performance was tested using a simulated example and compared with PCA and stacking methods. The existence of strong local effects at some stations causes significantly large spatial responses, and therefore a strategy based on median and interquartile range statistics was proposed to identify abnormal sites. After discarding abnormal sites, two indices based on the analysis of the spatial responses of all sites in each independent component (east, north, and vertical) were used to define the CME quantitatively. Continuous GPS coordinate time series spanning ~4.5 years obtained from 259 stations of the Tectonic and Environmental Observation Network of Mainland China (CMONOC II) were analyzed using both PCA and ICA methods and their results compared. The results suggest that PCA is susceptible to deriving an artificial spatial structure, whereas ICA separates the CME from other errors reliably. Our results demonstrate that the spatial characteristics of the CME for CMONOC II are not uniform for the east, north, and vertical components, but have an obvious north-south or east-west distribution. After discarding 84 abnormal sites and performing spatiotemporal
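
    A hedged sketch of ICA-based spatiotemporal filtering with scikit-learn's FastICA: components with a broad spatial response across stations are treated as the CME and removed. The residual matrix, component count, and uniformity threshold are assumptions, not the paper's two indices.

```python
import numpy as np
from sklearn.decomposition import FastICA

# R holds detrended residual series (n_epochs x n_stations) for one
# coordinate component (e.g. vertical); data here are placeholders.
rng = np.random.default_rng(9)
R = rng.normal(size=(1600, 40))

ica = FastICA(n_components=6, random_state=0)
S = ica.fit_transform(R)                   # temporal sources
M = ica.mixing_                            # spatial responses (stations x sources)

# Call a source "common" if most stations respond to it with similar
# magnitude; the 0.5 threshold is an illustrative assumption.
spread = np.abs(M).mean(axis=0) / np.abs(M).max(axis=0)
cme = spread > 0.5
R_clean = R - S[:, cme] @ M[:, cme].T      # filtered residuals
```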

  10. Spatiotemporal filtering for regional GPS network in China using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ming, Feng; Yang, Yuanxi; Zeng, Anmin; Zhao, Bin

    2016-11-01

    Removal of the common mode error (CME) is a routine procedure in postprocessing regional GPS network observations, which is commonly performed using principal component analysis (PCA). PCA decomposes a network time series into a group of modes, where each mode comprises a common temporal function and a corresponding spatial response based on second-order statistics (variance and covariance). However, the probability distribution function of a GPS time series is non-Gaussian; therefore, the largest variances do not correspond to the meaningful axes, and the PCA-derived components may not have an obvious physical meaning. In this study, the CME was assumed statistically independent of other errors, and it was extracted using independent component analysis (ICA), which involves higher-order statistics. First, the ICA performance was tested using a simulated example and compared with PCA and stacking methods. The existence of strong local effects at some stations causes significantly large spatial responses, and therefore a strategy based on median and interquartile range statistics was proposed to identify abnormal sites. After discarding abnormal sites, two indices based on the analysis of the spatial responses of all sites in each independent component (east, north, and vertical) were used to define the CME quantitatively. Continuous GPS coordinate time series spanning ~4.5 years obtained from 259 stations of the Tectonic and Environmental Observation Network of Mainland China (CMONOC II) were analyzed using both PCA and ICA methods and their results compared. The results suggest that PCA is susceptible to deriving an artificial spatial structure, whereas ICA separates the CME from other errors reliably. Our results demonstrate that the spatial characteristics of the CME for CMONOC II are not uniform for the east, north, and vertical components, but have an obvious north-south or east-west distribution. After discarding 84 abnormal sites and performing spatiotemporal

  11. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduces additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on the assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, a general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface in small components. To reduce the uncertainty of the plane measurement, an evaluation index of the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
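
    A minimal sketch of linearized error propagation under the paper's normality assumption: with a linearized model y = J x, the assembled-coordinate covariance follows C_y = J C_x J^T. The Jacobian and the input covariances are placeholders, not the paper's transmission functions.

```python
import numpy as np

# Linearized transmission function (Jacobian) from input errors to
# assembled coordinates; values are illustrative.
J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1]])

C_meas = np.diag([0.02, 0.02, 0.05]) ** 2  # planar-surface measurement error
C_fix = np.diag([0.01, 0.01, 0.01]) ** 2   # fixture error
C_x = C_meas + C_fix                       # independent error sources add

C_y = J @ C_x @ J.T                        # covariance of assembled coordinates
print("1-sigma position errors:", np.sqrt(np.diag(C_y)))
```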

  12. Biochemical component identification by light scattering techniques in whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-03-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins (albumin, interferon, C-reactive protein), microelements (Na+, Ca+), and antibiotics of different generations, in both single- and multi-component solutions over a wide range of concentrations. The analysis was performed on the light-scattering parameters of a whispering gallery mode (WGM) optical-resonance-based sensor, with glass and PMMA dielectric microspheres as sensitive elements, fixed by spin-coating techniques in an adhesive layer on the substrate surface or directly on the coupling element. The sensitive layer was integrated into a fluidic cell with a digital syringe. Light from a tunable laser, tightly focused on and scattered by a single microsphere, was detected by a CMOS camera. The image was filtered for noise reduction and integrated over two coordinates to evaluate the integrated energy of the measured signal. The following signal parameters were used as input data: the spectral shift of the WGM optical resonance frequency relative to a free spectral range, and the relative efficiency of WGM excitation within a free spectral range, both of which depend on the type and concentration of the investigated agents. Multiplexing over parameters and components was realized using the spatial and spectral parameters of the light scattered by the microspheres, together with dedicated data processing. Biochemical component classification and identification of the agents under investigation were performed by network analysis techniques based on a probabilistic network and a multilayer perceptron. The developed approach is shown to be applicable to both single-agent and multi-component biochemical analysis.

  13. Correcting waveform bias using principal component analysis: Applications in multicentre motion analysis studies.

    PubMed

    Clouthier, Allison L; Bohm, Eric R; Rudan, John F; Shay, Barbara L; Rainbow, Michael J; Deluzio, Kevin J

    2017-01-01

    Multicentre studies are rare in three-dimensional motion analysis due to challenges associated with combining waveform data from different centres. Principal component analysis (PCA) is a statistical technique that can be used to quantify variability in waveform data and identify group differences. A correction technique based on PCA is proposed that can be used in post-processing to remove nuisance variation introduced by the differences between centres. Using this technique, the waveform bias that exists between the two datasets is corrected such that the means agree. No information is lost in the individual datasets, but the overall variability in the combined data is reduced. The correction is demonstrated on gait kinematics with synthesized crosstalk and on gait data from knee arthroplasty patients collected in two centres. The induced crosstalk was successfully removed from the knee joint angle data. In the second example, the removal of the nuisance variation due to the multicentre data collection allowed significant differences in implant type to be identified. This PCA-based technique can be used to correct for differences between waveform datasets in post-processing and has the potential to enable multicentre motion analysis studies.
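
    A simplified stand-in for the paper's PCA-based correction: waveforms from one centre are shifted along the mean-difference direction so the centre means agree while within-centre variability is untouched; data and dimensions are synthetic.

```python
import numpy as np

# Two centres' gait waveforms (subjects x 101 time points), with centre B
# carrying an artificial offset standing in for the between-centre bias.
rng = np.random.default_rng(10)
base = np.sin(np.linspace(0, np.pi, 101))
A = 0.1 * rng.normal(size=(30, 101)) + base
B = 0.1 * rng.normal(size=(25, 101)) + base + 0.4

bias = B.mean(axis=0) - A.mean(axis=0)     # nuisance waveform between centres
B_corrected = B - bias                     # centre means now agree

combined = np.vstack([A, B_corrected])     # ready for a pooled PCA analysis
```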

  14. GNSS Vertical Coordinate Time Series Analysis Using Single-Channel Independent Component Analysis Method

    NASA Astrophysics Data System (ADS)

    Peng, Wei; Dai, Wujiao; Santerre, Rock; Cai, Changsheng; Kuang, Cuilin

    2017-02-01

    The daily vertical coordinate time series of Global Navigation Satellite System (GNSS) stations usually contain tectonic and non-tectonic deformation signals, residual atmospheric delay signals, measurement noise, etc. In geophysical studies, it is very important to separate the various geophysical signals in the GNSS time series so that the effect of mass loadings on crustal deformation is truthfully reflected. Based on the independence of the mass loadings, we combine Ensemble Empirical Mode Decomposition (EEMD) with the Phase Space Reconstruction-based Independent Component Analysis (PSR-ICA) method to analyze the vertical time series of GNSS reference stations. In a simulation experiment, the seasonal non-tectonic signal is simulated by the sum of the corrections for atmospheric mass loading and soil moisture mass loading. The simulated seasonal non-tectonic signal can be separated into two independent signals using the PSR-ICA method, which correlate strongly with atmospheric mass loading and soil moisture mass loading, respectively. Likewise, in the analysis of the vertical time series of GNSS reference stations of the Crustal Movement Observation Network of China (CMONOC), similar results are obtained using the combined EEMD and PSR-ICA method. All these results indicate that the EEMD and PSR-ICA method can effectively separate the independent atmospheric and soil moisture mass loading signals and illuminate the main cause of the seasonal variation of GNSS vertical time series in mainland China.
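
    A minimal sketch of the combined pipeline, assuming the PyEMD package (pip install EMD-signal) and scikit-learn; the IMF selection, embedding dimension, and lag are illustrative choices, not the paper's settings.

        import numpy as np
        from PyEMD import EEMD
        from sklearn.decomposition import FastICA

        def psr_ica(series, dim=4, lag=1, n_src=2):
            # 1) EEMD: split the series into intrinsic mode functions (IMFs)
            imfs = EEMD().eemd(series)
            seasonal = imfs[2:].sum(axis=0)      # e.g. keep the slower IMFs
            # 2) Phase-space reconstruction: lag-embed the single channel
            n = len(seasonal) - (dim - 1) * lag
            X = np.column_stack([seasonal[i * lag: i * lag + n] for i in range(dim)])
            # 3) ICA on the embedded matrix recovers independent source signals
            return FastICA(n_components=n_src, random_state=0).fit_transform(X)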

  15. The Use of Exploratory Factor Analysis and Principal Components Analysis in Communication Research.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Dailey, Rene; Lemus, Daisy

    2002-01-01

    Discusses the distinct purposes of principal components analysis (PCA) and exploratory factor analysis (EFA), using two data sets as examples. Reviews the use of each technique in three major communication journals: "Communication Monographs," "Human Communication Research," and "Communication Research." Finds that the…

  16. An Integrated Facet-Based Library for Arbitrary Software Components

    NASA Astrophysics Data System (ADS)

    Schmidt, Matthias; Polowinski, Jan; Johannes, Jendrik; Fernández, Miguel A.

    Reuse is an important means of reducing costs and effort during the development of complex software systems. A major challenge is to find suitable components in a large library with reasonable effort. This becomes even harder in today's development practice where a variety of artefacts such as models and documents play an equally important role as source code. Thus, different types of heterogeneous components exist and require consideration in a component search process. One flexible approach to structure (software component) libraries is faceted classification. Faceted classifications and in particular faceted browsing are nowadays widely used in online systems. This paper takes a fresh approach towards using faceted classification in heterogeneous software component libraries by transferring faceted browsing concepts from the web to software component libraries. It presents an architecture and implementation of such a library. This implementation is used to evaluate the applicability of facets in the context of an industry-driven case study.
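
    A toy sketch of the faceted browsing idea (all names here are hypothetical, not the paper's implementation): each component carries facet values, and a query narrows the library by intersecting facet filters.

        library = [
            {"name": "pid-controller", "artefact": "source code", "domain": "automotive"},
            {"name": "sensor-model",   "artefact": "model",       "domain": "medicine"},
            {"name": "user-manual",    "artefact": "document",    "domain": "automotive"},
        ]

        def browse(components, **facets):
            """Keep components matching every requested facet value."""
            return [c for c in components
                    if all(c.get(f) == v for f, v in facets.items())]

        print(browse(library, artefact="model"))      # one hit
        print(browse(library, domain="automotive"))   # two hits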

  17. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, environmental tobacco smoke (ETS) items were not a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with item patterns similar to those in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc.

  18. A new rolling bearing fault diagnosis method based on GFT impulse component extraction

    NASA Astrophysics Data System (ADS)

    Ou, Lu; Yu, Dejie; Yang, Hanjian

    2016-12-01

    Periodic impulses are vital indicators of rolling bearing faults, so the extraction of impulse components from rolling bearing vibration signals is of great importance for fault diagnosis. In this paper, vibration signals are treated as path-graph signals from a manifold perspective, and the Graph Fourier Transform (GFT) of vibration signals is investigated in the graph spectrum domain; both notions are thereby introduced into vibration signal analysis. To extract the impulse components efficiently, a new adjacency weight matrix is defined, and the GFTs of the impulse component and the harmonic component in rolling bearing vibration signals are analyzed. Furthermore, as the GFT graph spectrum of the impulse component is mainly concentrated in the high-order region, a new rolling bearing fault diagnosis method based on GFT impulse component extraction is proposed. In the proposed method, the GFT of a vibration signal is first performed, and its graph spectrum coefficients in the high-order region are extracted to reconstruct different impulse components. Next, the Hilbert envelope spectra of these impulse components are calculated, and the envelope spectrum values at the fault characteristic frequency are ranked. The envelope spectrum with the maximum value at the fault characteristic frequency is selected as the final result, from which the rolling bearing fault can be diagnosed. Finally, an index KR, the product of the kurtosis and the Hilbert envelope spectrum fault feature ratio of the extracted impulse component, is put forward to measure the performance of the proposed method. Simulations and experiments demonstrate the feasibility and effectiveness of the proposed method.
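
    A minimal sketch of the extraction step under simplifying assumptions: we use an unweighted path graph over the sample indices rather than the paper's specially designed adjacency weight matrix, keep only the high-order GFT coefficients, and compute the Hilbert envelope spectrum. Suitable only for short signal segments, since the eigendecomposition scales cubically.

        import numpy as np
        from scipy.signal import hilbert

        def gft_impulse(x, keep_frac=0.3):
            n = len(x)
            # Laplacian of an unweighted path graph over the sample indices
            A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
            L = np.diag(A.sum(axis=1)) - A
            lam, U = np.linalg.eigh(L)          # eigenvectors form the GFT basis
            c = U.T @ x                         # graph Fourier coefficients
            c[: int(n * (1 - keep_frac))] = 0   # keep only the high-order region
            return U @ c                        # reconstructed impulse component

        def envelope_spectrum(x):
            env = np.abs(hilbert(x))            # Hilbert envelope
            return np.abs(np.fft.rfft(env - env.mean()))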

  19. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2016-02-08

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  20. Sparse kernel entropy component analysis for dimensionality reduction of neuroimaging data.

    PubMed

    Jiang, Qikun; Shi, Jun

    2014-01-01

    Neuroimaging data typically have extremely high dimensionality; therefore, dimensionality reduction is commonly used to extract discriminative features. Kernel entropy component analysis (KECA) is a newly developed data transformation method whose key idea is to preserve the maximum estimated Renyi entropy of the input-space data set via a kernel-based estimator. Despite its good performance, KECA still suffers from low computational efficiency on large-scale data. In this paper, we propose a sparse KECA (SKECA) algorithm with a recursive divide-and-conquer solution, and then apply it to the dimensionality reduction of neuroimaging data for classification of Alzheimer's disease (AD). We compared SKECA with KECA, principal component analysis (PCA), kernel PCA (KPCA) and sparse KPCA. The experimental results indicate that the proposed SKECA outperforms all the other algorithms when extracting discriminative features from neuroimaging data for AD classification.
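
    A minimal sketch of the KECA selection rule itself (not the paper's sparse divide-and-conquer solver): eigenpairs of the kernel matrix are ranked by their Renyi entropy contribution, lambda_i * (1^T e_i)^2, rather than by eigenvalue size as in kernel PCA.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def keca(X, n_components=2, gamma=0.5):
            K = rbf_kernel(X, gamma=gamma)
            lam, E = np.linalg.eigh(K)                 # eigenpairs of the kernel
            # Renyi entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2
            contrib = lam * (E.sum(axis=0) ** 2)
            top = np.argsort(contrib)[::-1][:n_components]
            # Projection of the training data onto the selected axes
            return E[:, top] * np.sqrt(np.clip(lam[top], 0, None))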

  1. Weighted Kernel Entropy Component Analysis for Fault Diagnosis of Rolling Bearings

    PubMed Central

    Zhou, Hongdi; Shi, Tielin; Liao, Guanglan; Xuan, Jianping; Duan, Jie; Su, Lei; He, Zhenzhi; Lai, Wuxing

    2017-01-01

    This paper presents a supervised feature extraction method called weighted kernel entropy component analysis (WKECA) for fault diagnosis of rolling bearings. The method is developed based on kernel entropy component analysis (KECA) which attempts to preserve the Renyi entropy of the data set after dimension reduction. It makes full use of the labeled information and introduces a weight strategy in the feature extraction. The class-related weights are introduced to denote differences among the samples from different patterns, and genetic algorithm (GA) is implemented to seek out appropriate weights for optimizing the classification results. The features based on wavelet packet decomposition are derived from the original signals. Then the intrinsic geometric features extracted by WKECA are fed into the support vector machine (SVM) classifier to recognize different operating conditions of bearings, and we obtain the overall accuracy (97%) for the experimental samples. The experimental results demonstrated the feasibility and effectiveness of the proposed method. PMID:28335480

  2. Assessment of models for pedestrian dynamics with functional principal component analysis

    NASA Astrophysics Data System (ADS)

    Chraibi, Mohcine; Ensslen, Tim; Gottschalk, Hanno; Saadi, Mohamed; Seyfried, Armin

    2016-06-01

    Many agent-based simulation approaches have been proposed for pedestrian flow. As such models are applied, e.g., in evacuation studies, their quality and reliability are of vital interest. Pedestrian trajectories are functional data, and thus functional principal component analysis is a natural tool to assess the quality of pedestrian flow models beyond average properties. In this article we conduct functional Principal Component Analysis (PCA) for the trajectories of pedestrians passing through a bottleneck. In this way it is possible to assess the quality of the models not only on the basis of average values but also by considering their fluctuations. We benchmark two agent-based models of pedestrian flow against experimental data using both the average and the stochastic features captured by functional PCA. Functional PCA proves to be an efficient tool to detect deviations between simulation and experiment and to assess the quality of pedestrian models.

  3. Weighted Kernel Entropy Component Analysis for Fault Diagnosis of Rolling Bearings.

    PubMed

    Zhou, Hongdi; Shi, Tielin; Liao, Guanglan; Xuan, Jianping; Duan, Jie; Su, Lei; He, Zhenzhi; Lai, Wuxing

    2017-03-18

    This paper presents a supervised feature extraction method called weighted kernel entropy component analysis (WKECA) for fault diagnosis of rolling bearings. The method is developed based on kernel entropy component analysis (KECA) which attempts to preserve the Renyi entropy of the data set after dimension reduction. It makes full use of the labeled information and introduces a weight strategy in the feature extraction. The class-related weights are introduced to denote differences among the samples from different patterns, and genetic algorithm (GA) is implemented to seek out appropriate weights for optimizing the classification results. The features based on wavelet packet decomposition are derived from the original signals. Then the intrinsic geometric features extracted by WKECA are fed into the support vector machine (SVM) classifier to recognize different operating conditions of bearings, and we obtain the overall accuracy (97%) for the experimental samples. The experimental results demonstrated the feasibility and effectiveness of the proposed method.

  4. Digital photogrammetry for quantitative wear analysis of retrieved TKA components.

    PubMed

    Grochowsky, J C; Alaways, L W; Siskey, R; Most, E; Kurtz, S M

    2006-11-01

    The use of new materials in knee arthroplasty demands a way in which to accurately quantify wear in retrieved components. Methods such as damage scoring, coordinate measurement, and in vivo wear analysis have been used in the past. The limitations in these methods illustrate a need for a different methodology that can accurately quantify wear, which is relatively easy to perform and uses a minimal amount of expensive equipment. Off-the-shelf digital photogrammetry represents a potentially quick and easy alternative to what is readily available. Eighty tibial inserts were visually examined for front and backside wear and digitally photographed in the presence of two calibrated reference fields. All images were segmented (via manual and automated algorithms) using Adobe Photoshop and National Institute of Health ImageJ. Finally, wear was determined using ImageJ and Rhinoceros software. The absolute accuracy of the method and repeatability/reproducibility by different observers were measured in order to determine the uncertainty of wear measurements. To determine if variation in wear measurements was due to implant design, 35 implants of the three most prevalent designs were subjected to retrieval analysis. The overall accuracy of area measurements was 97.8%. The error in automated segmentation was found to be significantly lower than that of manual segmentation. The photogrammetry method was found to be reasonably accurate and repeatable in measuring 2-D areas and applicable to determining wear. There was no significant variation in uncertainty detected among different implant designs. Photogrammetry has a broad range of applicability since it is size- and design-independent. A minimal amount of off-the-shelf equipment is needed for the procedure and no proprietary knowledge of the implant is needed.
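
    The core measurement reduces to segmenting the photograph and converting a pixel count into a physical area via the calibrated reference fields; the following sketch illustrates that step only, with an illustrative threshold and scale factor (the study itself used Photoshop/ImageJ segmentation).

        import numpy as np

        def wear_area_mm2(image, threshold, mm_per_px):
            """image: grayscale array; mm_per_px from a reference field of known size."""
            mask = image > threshold              # automated segmentation step
            return mask.sum() * mm_per_px ** 2    # pixel count -> physical area

        # Hypothetical usage: a 10 px/mm calibration gives mm_per_px = 0.1
        img = np.random.rand(512, 512)
        print(wear_area_mm2(img, threshold=0.8, mm_per_px=0.1))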

  5. Components for automated microfluidics sample preparation and analysis

    NASA Astrophysics Data System (ADS)

    Archer, M.; Erickson, J. S.; Hilliard, L. R.; Howell, P. B., Jr.; Stenger, D. A.; Ligler, F. S.; Lin, B.

    2008-02-01

    The increasing demand for portable devices to detect and identify pathogens represents an interdisciplinary effort between engineering, materials science, and molecular biology. Automation of both sample preparation and analysis is critical for performing multiplexed analyses on real world samples. This paper selects two possible components for such automated portable analyzers: modified silicon structures for use in the isolation of nucleic acids and a sheath flow system suitable for automated microflow cytometry. Any detection platform that relies on the genetic content (RNA and DNA) present in complex matrices requires careful extraction and isolation of the nucleic acids in order to ensure their integrity throughout the process. This sample pre-treatment step is commonly performed using commercially available solid phases along with various molecular biology techniques that require multiple manual steps and dedicated laboratory space. Regardless of the detection scheme, a major challenge in the integration of total analysis systems is the development of platforms compatible with current isolation techniques that will ensure the same quality of nucleic acids. Silicon is an ideal candidate for solid phase separations since it can be tailored structurally and chemically to mimic the conditions used in the laboratory. For analytical purposes, we have developed passive structures that can be used to fully ensheath one flow stream with another. As opposed to traditional flow focusing methods, our sheath flow profile is truly two dimensional, making it an ideal candidate for integration into a microfluidic flow cytometer. Such a microflow cytometer could be used to measure targets captured on either antibody- or DNA-coated beads.

  6. Construction Formula of Biological Age Using the Principal Component Analysis

    PubMed Central

    Jia, Linpei; Zhang, Weiguang; Jia, Rufu

    2016-01-01

    The biological age (BA) equation is a prediction model that uses an algorithm to combine various biological markers of ageing. Unlike traditional concepts, the BA equation does not emphasize the importance of a single golden index but focuses on using indices of vital organs to represent the senescence of the whole body. This model has been used to assess the ageing process more precisely and may predict possible diseases better than the chronological age (CA). Principal component analysis (PCA) is one of the most common and frequently used methods in the construction of the BA formula. Compared with other methods, PCA has its own study procedures and features. Herein we summarize up-to-date knowledge about BA formula construction and discuss the influential factors, so as to give an overview of BA estimation by PCA, including the composition of samples, the choice of test items, and the selection of ageing biomarkers. We also discuss the advantages and disadvantages of PCA with reference to the construction mechanism, accuracy, and practicability of several common methods used in constructing the BA formula. PMID:28050560

  7. Dissection of the hormetic curve: analysis of components and mechanisms.

    PubMed

    Lushchak, Volodymyr I

    2014-07-01

    The relationship between the dose of an effector and the biological response frequently is not described by a linear function; moreover, in some cases the dose-response relationship may change from positive/adverse to adverse/positive with increasing dose. This complicated relationship is called "hormesis". This paper provides a short analysis of the concept along with a description of the approaches used to characterize hormetic relationships. The whole hormetic curve can be divided into three zones: I - a lag-zone where no changes are observed with increasing dose; II - a zone where beneficial/adverse effects are observed; and III - a zone where the effects are opposite to those seen in zone II. Some approaches are proposed to analyze the molecular components involved in the development of the hormetic character of dose-response relationships with the use of specific genetic lines or inhibitors of regulatory pathways. The discussion is then extended to suggest a new parameter (the half-width of the hormetic curve at zone II) for quantitative characterization of the hormetic curve. The problems limiting progress in the development of the hormesis concept, such as low reproducibility and predictability, may be solved, at least partly, by deciphering the molecular mechanisms underlying the hormetic dose-effect relationship.

  8. Significance-linked connected component analysis for wavelet image coding.

    PubMed

    Chai, B B; Vass, J; Zhuang, X

    1999-01-01

    Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
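
    A minimal sketch of the within-subband clustering idea behind SLCCA (not the full coder), assuming PyWavelets and SciPy: threshold a detail subband for significance and label connected clusters of significant coefficients.

        import numpy as np
        import pywt
        from scipy.ndimage import label

        image = np.random.rand(256, 256)                # stand-in for a test image
        coeffs = pywt.wavedec2(image, "db2", level=3)
        cH, cV, cD = coeffs[1]                          # one detail subband triple
        significant = np.abs(cH) > 3.0 * np.abs(cH).std()  # significance map
        clusters, n_clusters = label(significant)       # within-subband clustering
        print(f"{n_clusters} connected clusters of significant coefficients")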

  9. Dissection of the Hormetic Curve: Analysis of Components and Mechanisms

    PubMed Central

    Lushchak, Volodymyr I.

    2014-01-01

    The relationship between the dose of an effector and the biological response frequently is not described by a linear function; moreover, in some cases the dose-response relationship may change from positive/adverse to adverse/positive with increasing dose. This complicated relationship is called “hormesis”. This paper provides a short analysis of the concept along with a description of the approaches used to characterize hormetic relationships. The whole hormetic curve can be divided into three zones: I – a lag-zone where no changes are observed with increasing dose; II – a zone where beneficial/adverse effects are observed; and III – a zone where the effects are opposite to those seen in zone II. Some approaches are proposed to analyze the molecular components involved in the development of the hormetic character of dose-response relationships with the use of specific genetic lines or inhibitors of regulatory pathways. The discussion is then extended to suggest a new parameter (the half-width of the hormetic curve at zone II) for quantitative characterization of the hormetic curve. The problems limiting progress in the development of the hormesis concept, such as low reproducibility and predictability, may be solved, at least partly, by deciphering the molecular mechanisms underlying the hormetic dose-effect relationship. PMID:25249836

  10. Construction Formula of Biological Age Using the Principal Component Analysis.

    PubMed

    Jia, Linpei; Zhang, Weiguang; Jia, Rufu; Zhang, Hongliang; Chen, Xiangmei

    2016-01-01

    The biological age (BA) equation is a prediction model that uses an algorithm to combine various biological markers of ageing. Unlike traditional concepts, the BA equation does not emphasize the importance of a single golden index but focuses on using indices of vital organs to represent the senescence of the whole body. This model has been used to assess the ageing process more precisely and may predict possible diseases better than the chronological age (CA). Principal component analysis (PCA) is one of the most common and frequently used methods in the construction of the BA formula. Compared with other methods, PCA has its own study procedures and features. Herein we summarize up-to-date knowledge about BA formula construction and discuss the influential factors, so as to give an overview of BA estimation by PCA, including the composition of samples, the choice of test items, and the selection of ageing biomarkers. We also discuss the advantages and disadvantages of PCA with reference to the construction mechanism, accuracy, and practicability of several common methods used in constructing the BA formula.

  11. The Effectiveness of Blind Source Separation Using Independent Component Analysis for GNSS Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Dong, Danan; Chen, Wen

    2016-04-01

    Due to the development of GNSS technology and the improvement of its positioning accuracy, observational data obtained by GNSS are widely used in space geodesy and geodynamics research. The GNSS time series of observation stations contain a wealth of information, including geographic change, deformation of the Earth, migration of subsurface material, instantaneous deformation of the Earth, weak deformation, and other blind signals. In order to extract the instantaneous subsurface deformation, weak deformation, and other blind signals hidden in GNSS time series, we apply Independent Component Analysis (ICA) to the daily station coordinate time series of the Southern California Integrated GPS Network. ICA is based on the statistical characteristics of the observed signal: it uses non-Gaussianity and independence to recover the source signals of the underlying geophysical events. As part of the post-processing of precise GNSS time series, this paper compares the principal component analysis (PCA) module of QOCA with the ICA algorithm for separating, from the observed signals, the original signals related to geophysical disturbances. The analysis of these two separation approaches demonstrates that when multiple factors are present, PCA leaves ambiguity in the separation of source signals, i.e., the attribution of the results is unclear, whereas ICA performs better; ICA is therefore the more suitable choice for GNSS time series in which the combination of source signals is unknown.
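
    The contrast described above can be reproduced on synthetic data with scikit-learn: PCA merely decorrelates the mixtures, while FastICA recovers the independent sources (up to order, sign, and scale).

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        t = np.linspace(0, 8, 2000)
        S = np.column_stack([np.sin(2 * t),               # periodic source
                             np.sign(np.cos(3 * t))])     # non-Gaussian source
        X = S @ np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown mixing

        pca_est = PCA(n_components=2).fit_transform(X)    # decorrelated only
        ica_est = FastICA(n_components=2, random_state=0,
                          max_iter=1000).fit_transform(X)
        # ica_est matches S up to order, sign, and scale; pca_est generally does not.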

  12. Internet MEMS design tools based on component technology

    NASA Astrophysics Data System (ADS)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium-sized enterprises specializing in products that solve problems in specific domains such as medicine, automotive sensor technology, etc. In this field of business, the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem calls for its own specific technology for the solution. The variety of technologies at hand, such as Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible means of offering software access along with flexible licensing methodologies, e.g., on a pay-per-use basis. New communication technologies such as ADSL, TV cable, or satellites as carriers promise to offer a bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies, to be accessed via the Internet. The first version provides a Java implementation, including a graphical editor for process specification. Currently, a new version based on JavaBeans component technology is being brought into operation. JavaBeans makes it possible to realize independent interactive design assistants, such as a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, and a graphical editor assistant, that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own version of a design tool set dedicated to the requirements of the current problem to be solved.

  13. Viscosity of carbon nanotube suspension using artificial neural networks with principal component analysis

    NASA Astrophysics Data System (ADS)

    Yousefi, Fakhri; Karimi, Hajir; Mohammadiyan, Somayeh

    2016-11-01

    This paper applies a model combining a back-propagation network (BPN) with principal component analysis (PCA) to estimate the effective viscosity of carbon nanotube suspensions. The effective viscosities of multiwall carbon nanotube suspensions are examined as a function of the temperature, nanoparticle volume fraction, effective nanoparticle length, and the viscosity of the base fluid using an artificial neural network. The results obtained by the BPN-PCA model agree well with the experimental data.
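
    A minimal sketch of a BPN-PCA style model with scikit-learn, using placeholder data: PCA compresses the four inputs (temperature, volume fraction, nanotube length, base-fluid viscosity) before a back-propagation network regresses the effective viscosity.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        X = np.random.rand(200, 4)     # placeholder inputs
        y = np.random.rand(200)        # placeholder effective viscosity

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=3),
                              MLPRegressor(hidden_layer_sizes=(10,),
                                           max_iter=2000, random_state=0))
        model.fit(X, y)
        print(model.predict(X[:5]))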

  14. Demasking the integrated value of discharge - Advanced sensitivity analysis on the components of hydrological models

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin

    2016-04-01

    The hydrologic response variable most often used in sensitivity analysis is discharge, which provides an integrated value of all catchment processes. A typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, because discharge is the aggregated effect of all hydrological processes, the sensitivity signal of a given model parameter can be strongly masked. A more advanced form of sensitivity analysis would be achieved if we could investigate how the sensitivity of a certain modelled process variable relates to changes in a parameter. On this basis, the controlling parameters for different hydrological components could be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed both for the modelled hydrological components themselves and for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both an increased indication of parameter sensitivity and a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. By relating these results to the model structure, we can see that the sensitivity of model parameters is influenced by the function of the parameter: while capacity parameters show more sensitivity to the modelled hydrological component itself, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal.
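
    A minimal sketch of a single FAST sensitivity analysis, assuming the SALib package; a TEDPAS-style analysis would repeat this for each day's model output. The model function and parameter names are placeholders.

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        problem = {"num_vars": 3,
                   "names": ["capacity_par", "flux_par", "routing_par"],
                   "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]}

        X = fast_sampler.sample(problem, 1000)              # FAST parameter samples
        Y = X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]    # placeholder model output
        Si = fast.analyze(problem, Y)                       # first-order and total indices
        print(Si["S1"])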

  15. Homogenization of soil properties map by Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Valverde Arias, Omar; Garrido, Alberto; Villeta, Maria; Tarquis, Ana Maria

    2016-04-01

    It is widely known that extreme climatic phenomena are occurring with greater intensity and frequency. This fact has put more pressure on farming, making it very important for governments and institutions to implement agricultural risk management policies. One of the main strategies is to transfer risk through agricultural insurance. Index-based agricultural insurance has gained importance in the last decade; it consists of comparing measured index values with a defined threshold that triggers damage losses. However, index-based insurance cannot rest on an isolated measurement. It must be integrated into a complete monitoring system that uses many sources of information and tools, for example index influence areas, crop production risk maps, crop yields, claim statistics, and so on. To establish an index influence area, secondary information is needed that delineates homogeneous climatic and soil areas; within each homogeneous class, index measurements on the crops of interest will be similar, thereby reducing basis risk. An efficient method is therefore needed to obtain homogeneous areas that does not depend solely on expert criteria and that can be widely applied. For this reason, this study assesses two conventional agricultural and geographic methods (control and climatic maps) based on expert criteria, and one classical statistical method of multi-factorial analysis (factorial map), all of them used to homogenize soil and climatic characteristics. The resulting maps were validated by agricultural and spatial analysis, with very good results for the statistical method (factorial map), which proves to be an efficient and accurate method that could be used for similar purposes.

  16. A robust independent component analysis (ICA) model for functional magnetic resonance imaging (fMRI) data

    NASA Astrophysics Data System (ADS)

    Ao, Jingqi; Mitra, Sunanda; Liu, Zheng; Nutter, Brian

    2011-03-01

    The coupling of carefully designed experiments with proper analysis of functional magnetic resonance imaging (fMRI) data provides us with a powerful and noninvasive tool to help us understand cognitive processes associated with specific brain regions, and hence could be used to detect abnormalities induced by a diseased state. The hypothesis-driven General Linear Model (GLM) and the data-driven Independent Component Analysis (ICA) model are the two most commonly used models for fMRI data analysis. A hybrid ICA-GLM model combines the two to take advantage of the benefits of both, achieving more accurate mapping of the stimulus-induced activated brain regions. We propose a modified hybrid ICA-GLM model with probabilistic ICA that includes a noise model. In this modified hybrid model, a probabilistic principal component analysis (PPCA)-based component number estimation is used in the ICA stage to extract the intrinsic number of original time courses. In addition, frequency matching is introduced into the time course selection stage, along with temporal correlation, F-test based model fitting estimation, and time course combination, to produce a more accurate design matrix for the GLM. A standard fMRI dataset is used to compare the results of applying GLM and the proposed hybrid ICA-GLM in generating activation maps.

  17. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixed together with the EEG of interest. Artifact removal has largely been addressed by existing decomposition methods known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain using Morphological Component Analysis (MCA), which provides a means of reconstruction that guarantees accuracy by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis, with iEEGs recorded intracranially from the brain, the signals were successfully decomposed into their original types by a linear expansion of waveforms over redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrate that the combination best suited to EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphology. PMID:28194221
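
    A minimal sketch of the MCA idea with two of the dictionaries named above (DCT for oscillatory waveforms, DIRAC for spikes), using simple iterative shrinkage with a decreasing threshold; the paper's dictionary set also includes UDWT, LDCT, and DST.

        import numpy as np
        from scipy.fft import dct, idct

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def mca(x, n_iter=100):
            s_osc = np.zeros_like(x)      # component sparse in DCT
            s_spk = np.zeros_like(x)      # component sparse in DIRAC (identity)
            thresh = np.abs(x).max()
            for k in range(n_iter):
                t = thresh * (1 - k / n_iter)      # linearly decreasing threshold
                r = x - s_osc - s_spk              # current residual
                s_osc = idct(soft(dct(s_osc + r, norm="ortho"), t), norm="ortho")
                r = x - s_osc - s_spk
                s_spk = soft(s_spk + r, t)
            return s_osc, s_spk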

  18. ERP component analysis for rapid image searching in finer categories.

    PubMed

    Xiao, Siyuan; Cai, Bangyu; Jiang, Lei; Wang, Yiweng; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Event-related potential (ERP)-based image triage (or search) in the context of Rapid Serial Visual Presentation (RSVP) exploits the difference in the human brain response to target and distractor stimuli presented as images. So far, most paradigms have focused on image triage (or search) among coarse object categories. In this paper, we explored the possibility and effectiveness of target detection among finer categories, such as different animals. We analyzed the difference in ERP components between two image search tasks: a simple-recognition task, in which all images of a target are identical, and a discriminative-recognition task, in which the images are randomly different but belong to the same target category (the same kind of animal). We observed that the P3 amplitude was reduced and the P3 latency delayed in the discriminative-recognition condition, owing to the increased difficulty of identifying different images belonging to the same category. Nevertheless, the average area under the ROC curve reached 0.82, indicating that rapid target detection among finer categories by single-trial ERP is feasible, with a trivial contribution from N1 and stable contributions from N2 and P3.

  19. A Component Analysis of Positive Behaviour Support Plans

    ERIC Educational Resources Information Center

    McClean, Brian; Grey, Ian

    2012-01-01

    Background: Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Method: Sixty-one staff working with individuals with intellectual disability…

  20. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The purpose of this study is to assess the contribution of the three major gasoline blending components to the potential environmental impacts (PEI), which are the reformate, alkylate and cracked gasoline. This study accounts for losses of the gasoline blending components due to...

  1. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The purpose of this study is to assess the contribution of the three major gasoline blending components to the potential environmental impacts (PEI), which are the reformate, alkylate and cracked gasoline. This study accounts for losses of the gasoline blending components due to ...

  2. A comparison of principal components using TPCA and nonstationary principal component analysis on daily air-pollutant concentration series

    NASA Astrophysics Data System (ADS)

    Shen, Chenhua

    2017-02-01

    We applied traditional principal component analysis (TPCA) and nonstationary principal component analysis (NSPCA) to determine the principal components in six daily air-pollutant concentration series (SO2, NO2, CO, O3, PM2.5 and PM10) in Nanjing from January 2013 to March 2016. The results show that using TPCA, two principal components can reflect the variance of these series: primary pollutants (SO2, NO2, CO, PM2.5 and PM10) and secondary pollutants (e.g., O3). However, using NSPCA, three principal components can be determined that reflect the detrended variance of these series: 1) a mixture of primary and secondary pollutants, 2) primary pollutants and 3) secondary pollutants. Different approaches thus yield different principal components, a phenomenon closely related to the method used for calculating the cross-correlation between the air pollutants. NSPCA is a more applicable and reliable method than TPCA for analyzing the principal components of series in the presence of nonstationarity and long-range correlation. Moreover, using detrended cross-correlation analysis (DCCA), the cross-correlation between O3 and NO2 is negative at short timescales and positive at long timescales: on hourly timescales, O3 is negatively correlated with NO2 due to a photochemical interaction, while on daily timescales O3 is positively correlated with NO2 because of the decomposition of O3. On monthly timescales, the cross-correlation of O3 with NO2 behaves similarly to that of O3 with the meteorological elements. DCCA is again shown to be more appropriate than Pearson's method for disclosing the cross-correlation between series in the presence of nonstationarity, and it can improve our understanding of their interactional mechanisms.
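
    A minimal sketch of the DCCA cross-correlation coefficient rho(n) at box size n, following the standard detrended cross-correlation recipe: build the integrated profiles, detrend both within each box, and normalize the detrended covariance by the two detrended variances.

        import numpy as np

        def rho_dcca(x, y, n):
            X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
            f2xy = f2x = f2y = 0.0
            boxes = 0
            for start in range(0, len(X) - n + 1, n):
                t = np.arange(n)
                xs, ys = X[start:start + n], Y[start:start + n]
                rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # detrend the box
                ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
                f2xy += (rx * ry).mean()
                f2x += (rx * rx).mean()
                f2y += (ry * ry).mean()
                boxes += 1
            return (f2xy / boxes) / np.sqrt((f2x / boxes) * (f2y / boxes))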

  3. Analog optoelectronic independent component analysis for radio frequency signals

    NASA Astrophysics Data System (ADS)

    Baylor, Martha-Elizabeth

    This thesis addresses the problem of blind source separation of signals at radio frequencies. Independent component analysis (ICA), which comprises a second-order decorrelation followed by a fourth-order decorrelation, uses signal independence to estimate the original signals from the received mixtures. Until now, ICA has been applied mostly at or below audio frequencies. The work presented here demonstrates that an optoelectronic implementation using the parallel processing nature of dynamic holography can overcome the computational difficulties associated with algorithmic implementations of ICA. The holographic nature of a photorefractive crystal combined with the nonlinearity of an electro-optic modulator in a feedback loop can be described by a nonlinear dynamical equation. The dynamics can be cast in the form of the Lotka-Volterra equations used to study the dynamics of competing populations of species. Although this analogy with the animal world is interesting, the dynamical equation associated with the fourth-order decorrelation system is more remarkable: the statistics of the original signals, rather than an external potential, determine the dynamics of the system. In particular, the system is multistable, metastable, or monostable depending on whether the probability density functions of the original signals are sub-Gaussian, Gaussian, or super-Gaussian, respectively. The multistable solution, which occurs for sub-Gaussian signals, provides the winner-takes-all behavior required to separate signals. This ability to separate sub-Gaussian signals is advantageous since signals modulated on a sinusoidal carrier are sub-Gaussian. The fourth-order decorrelation system achieves greater than 40 dB signal separation on 200 MHz single-frequency sine waves and greater than 20 dB signal separation for 10 MHz bandwidth signals. The system performance is degraded by 10 to 20 dB when the signals are mixed electronically, owing to imperfections in the mixing circuitry.

  4. Incorporating principal component analysis into air quality model evaluation

    NASA Astrophysics Data System (ADS)

    Eder, Brian; Bash, Jesse; Foley, Kristen; Pleim, Jon

    2014-01-01

    The efficacy of standard air quality model evaluation techniques is becoming compromised as simulation periods continue to lengthen in response to ever-increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variation (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric - the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO42-) and ammonium (NH4+) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor to monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station-grid cell pairs) that can serve as indicators for a more thorough diagnostic evaluation, thereby hastening and facilitating understanding of the probable mechanisms responsible for the unique behavior among bias regimes. A sampling of results indicates that biases are still prevalent in both SO42- and NH4+ simulations and can be attributed to either: 1) cloud processes in the meteorological model utilized by CMAQ, which is found to overestimate convective clouds and precipitation while underestimating larger-scale resolved clouds that are less likely to precipitate, and 2) biases associated with Midwest NH3 emissions, which may be partially ameliorated

  5. Tracing Cattle Breeds with Principal Components Analysis Ancestry Informative SNPs

    PubMed Central

    Lewis, Jamey; Abas, Zafiris; Dadousis, Christos; Lykidis, Dimitrios; Paschou, Peristera; Drineas, Petros

    2011-01-01

    The recent release of the Bovine HapMap dataset represents the most detailed survey of bovine genetic diversity to date, providing an important resource for the design and development of livestock production. We studied this dataset, comprising more than 30,000 Single Nucleotide Polymorphisms (SNPs) for 19 breeds (13 taurine, three zebu, and three hybrid breeds), seeking to identify small panels of genetic markers that can be used to trace the breed of unknown cattle samples. Taking advantage of the power of Principal Components Analysis and algorithms that we have recently described for the selection of Ancestry Informative Markers from genomewide datasets, we present a decision-tree which can be used to accurately infer the origin of individual cattle. In doing so, we present a thorough examination of population genetic structure in modern bovine breeds. Performing extensive cross-validation experiments, we demonstrate that 250-500 carefully selected SNPs suffice in order to achieve close to 100% prediction accuracy of individual ancestry, when this particular set of 19 breeds is considered. Our methods, coupled with the dense genotypic data that is becoming increasingly available, have the potential to become a valuable tool and have considerable impact in worldwide livestock production. They can be used to inform the design of studies of the genetic basis of economically important traits in cattle, as well as breeding programs and efforts to conserve biodiversity. Furthermore, the SNPs that we have identified can provide a reliable solution for the traceability of breed-specific branded products. PMID:21490966

  6. Hybrid integrated photonic components based on a polymer platform

    NASA Astrophysics Data System (ADS)

    Eldada, Louay A.

    2003-06-01

    We report on a polymer-on-silicon optical bench platform that enables the hybrid integration of elemental passive and active optical functions. Planar polymer circuits are produced photolithographically, and slots are formed in them for the insertion of chips and films of a variety of materials. The polymer circuits provide interconnects, static routing elements such as couplers, taps, and multi/demultiplexers, as well as thermo-optically dynamic elements such as switches, variable optical attenuators, and tunable notch filters. Crystal-ion-sliced thin films of lithium niobate are inserted in the polymer circuit for polarization control or for electro-optic modulation. Films of yttrium iron garnet and neodymium iron boron magnets are inserted in order to magneto-optically achieve non-reciprocal operation for isolation and circulation. Indium phosphide and gallium arsenide chips are inserted for light generation, amplification, and detection, as well as wavelength conversion. The functions enabled by this multi-material platform span the range of the building blocks needed in optical circuits, while using the highest-performance material system for each function. We demonstrated complex-functionality photonic components based on this technology, including a metro ring node module and a tunable optical transmitter. The metro ring node chip includes switches, variable optical attenuators, taps, and detectors; it enables optical add/drop multiplexing, power monitoring, and automatic load balancing, and it supports shared and dedicated protection protocols in two-fiber metro ring optical networks. The tunable optical transmitter chip includes a tunable external cavity laser, an isolator, and a high-speed modulator.

  7. Spatiotemporal analysis of GPS time series in vertical direction using independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Dai, Wujiao; Peng, Wei; Meng, Xiaolin

    2015-11-01

    GPS has been widely used in the fields of geodesy and geodynamics thanks to its technological development and the improvement of its positioning accuracy. A time series observed by GPS in the vertical direction usually contains tectonic signals, non-tectonic signals, residual atmospheric delay, measurement noise, etc. Analyzing this information is the basis of crustal deformation research, and analyzing the GPS time series and extracting the non-tectonic information are helpful for studying the effects of various geophysical events. Principal component analysis (PCA) is an effective tool for spatiotemporal filtering and GPS time series analysis, but because it is unable to extract statistically independent components, PCA is ill-suited for recovering the information implicit in a time series. Independent component analysis (ICA) is a statistical method of blind source separation (BSS) that can separate original signals from mixed observations. In this paper, ICA is used as a spatiotemporal filtering method to analyze the spatial and temporal features of vertical GPS coordinate time series in the UK and in the Sichuan-Yunnan region of China. Meanwhile, the contributions from atmospheric and soil moisture mass loading are evaluated. Analysis of the relevance between the independent components and mass loading, together with their spatial distribution, shows that the signals extracted by ICA have a strong correlation with the non-tectonic deformation, indicating that ICA performs better in spatiotemporal analysis.

  8. Estimation and Psychometric Analysis of Component Profile Scores via Multivariate Generalizability Theory

    ERIC Educational Resources Information Center

    Grochowalski, Joseph H.

    2015-01-01

    Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…

  9. Bacillus spore classification via surface-enhanced Raman spectroscopy and principal component analysis.

    PubMed

    Guicheteau, J; Argue, L; Emge, D; Hyre, A; Jacobson, M; Christesen, S

    2008-03-01

    Surface-enhanced Raman spectroscopy (SERS) can provide rapid fingerprinting of biomaterial in a nondestructive manner. The adsorption of colloidal silver to biological material suppresses native biofluorescence while providing electromagnetic surface enhancement of the normal Raman signal. This work validates the applicability of qualitative SER spectroscopy for analysis of bacterial species by utilizing principal component analysis (PCA) to show discrimination of biological threat simulants, based upon multivariate statistical confidence limits bounding known data clusters. Gram-positive Bacillus spores (Bacillus atrophaeus, Bacillus anthracis, and Bacillus thuringiensis) are investigated along with the Gram-negative bacterium Pantoea agglomerans.

  10. Joint Procrustes Analysis for Simultaneous Nonsingular Transformation of Component Score and Loading Matrices

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2009-01-01

    In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…

  11. Electromagnetic crystal based terahertz thermal radiators and components

    NASA Astrophysics Data System (ADS)

    Wu, Ziran

    prototyping approach. Third, an all-dielectric THz waveguide is designed, fabricated and characterized. The design is based on a hollow-core EMXT waveguide, and the fabrication is implemented with the THz prototyping method. Characterization results for the waveguide power loss factor show good consistency with simulation, and waveguide propagation loss as low as 0.03 dB/mm at 105 GHz is demonstrated. Several design parameters are also varied and their impacts on the waveguide performance investigated theoretically. Finally, a THz EMXT antenna based on expanding the defect radius of the EMXT waveguide into a horn shape is proposed and studied. The boresight directivity and main beam angular width of the optimized EMXT horn antenna are comparable with those of a copper horn antenna of the same dimensions at low frequencies, and much better than those of the copper horn at high frequencies. The EMXT antenna has been successfully fabricated via the same THz prototyping, and we believe this is the first time an EMXT antenna of this architecture has been fabricated. Far-field measurement of the EMXT antenna radiation pattern is under way. Also, in order to integrate planar THz solid-state devices (especially sources and detectors) and THz samples under test with the potential THz micro-system fabricable by the prototyping approach, an EMXT waveguide-to-microstrip line transition structure is designed. The structure uses tapered solid dielectric waveguides on both ends to transfer THz energy from the EMXT waveguide defect onto the microstrip line. Simulation of the transition structure in a back-to-back configuration yields about -15 dB insertion loss, mainly due to dielectric material loss. The coupling and radiation loss of the transition structure is estimated to be -2.115 dB. The fabrication and characterization of the transition system is currently underway. With all the above THz components realized in the future, integrated THz micro-systems manufactured by the same prototyping technique will be

  12. Data-Parallel Mesh Connected Components Labeling and Analysis

    SciTech Connect

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
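    For orientation, a serial toy version of the Union-find core (the paper's multi-stage, distributed variant with spatial partitioning is considerably more involved; the cell adjacency below is a placeholder):

```python
# Serial union-find sketch: merge cells that share a face, then read off a
# label per cell. Path halving keeps the trees shallow.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

edges = [(0, 1), (1, 2), (4, 5)]       # illustrative mesh-cell adjacency
uf = UnionFind(6)
for a, b in edges:
    uf.union(a, b)
print([uf.find(i) for i in range(6)])  # cells 0-2 share a label, 3 is alone, 4-5 share
```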

  13. A component analysis of schedule thinning during functional communication training.

    PubMed

    Betz, Alison M; Fisher, Wayne W; Roane, Henry S; Mintz, Joslyn C; Owen, Todd M

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can effectively address this limitation. However, the extent to which each of these components contributes to the effectiveness of the overall approach remains uncertain. In the current investigation, we evaluated the first component by comparing rates of the FCR and problem behavior under mixed and multiple schedules and evaluated the second component by rapidly switching from dense mixed and multiple schedules to lean multiple schedules without gradually thinning the density of reinforcement. Results indicated that multiple schedules decreased the overall rate of reinforcement for the FCR and maintained the strength of the FCR and low rates of problem behavior without gradually thinning the reinforcement schedule.

  14. Dissecting the molecular structure of the Orion B cloud: insight from principal component analysis

    NASA Astrophysics Data System (ADS)

    Gratier, Pierre; Bron, Emeric; Gerin, Maryvonne; Pety, Jérôme; Guzman, Viviana V.; Orkisz, Jan; Bardeau, Sébastien; Goicoechea, Javier R.; Le Petit, Franck; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Roueff, Evelyne; Sievers, Albrecht; Tremblin, Pascal

    2017-03-01

    Context. The combination of wideband receivers and spectrometers currently available in (sub-)millimeter observatories deliver wide-field hyperspectral imaging of the interstellar medium. Tens of spectral lines can be observed over degree wide fields in about 50 h. This wealth of data calls for restating the physical questions about the interstellar medium in statistical terms. Aims: We aim to gain information on the physical structure of the interstellar medium from a statistical analysis of many lines from different species over a large field of view, without requiring detailed radiative transfer or astrochemical modeling. Methods: We coupled a non-linear rescaling of the data with one of the simplest multivariate analysis methods, namely the principal component analysis, to decompose the observed signal into components that we interpret first qualitatively and then quantitatively based on our deep knowledge of the observed region and of the astrochemistry at play. Results: We identify three principal components, linear compositions of line brightness temperatures, that are correlated at various levels with the column density, the volume density and the UV radiation field. Conclusions: When sampling a sufficiently diverse mixture of physical parameters, it is possible to decompose the molecular emission in order to gain physical insight on the observed interstellar medium. This opens a new avenue for future studies of the interstellar medium. Based on observations carried out at the IRAM-30 m single-dish telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
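    A hedged outline of the recipe: the arcsinh rescaling and array shapes are our assumptions for illustration, not the paper's exact preprocessing:

```python
# Sketch: nonlinearly rescale pixel-by-line brightness data, standardize, and
# decompose with PCA into a few components; loadings give the line weights.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
cube = rng.lognormal(size=(10000, 12))   # pixels x spectral lines (synthetic)

rescaled = np.arcsinh(cube)              # tame the large dynamic range
rescaled -= rescaled.mean(axis=0)
rescaled /= rescaled.std(axis=0)

pca = PCA(n_components=3).fit(rescaled)
print(pca.explained_variance_ratio_)     # variance captured per component
print(pca.components_.round(2))          # line weights defining each component
```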

  15. Analysis of Dynamic Interactions between Different Drivetrain Components with a Detailed Wind Turbine Model

    NASA Astrophysics Data System (ADS)

    Bartschat, A.; Morisse, M.; Mertens, A.; Wenske, J.

    2016-09-01

    The presented work describes a detailed analysis of the dynamic interactions among mechanical and electrical drivetrain components of a modern wind turbine under the influence of parameter variations, different control mechanisms and transient excitations. For this study, a detailed model of a 2 MW wind turbine with a gearbox, a permanent magnet synchronous generator and a full power converter has been developed which considers all relevant characteristics of the mechanical and electrical subsystems. This model includes an accurate representation of the aerodynamics and the mechanical properties of the rotor and the complete mechanical drivetrain. Furthermore, a detailed electrical modelling of the generator, the full scale power converter with discrete switching devices, its filters, the transformer and the grid as well as the control structure is considered. The analysis shows that, considering control measures based on active torsional damping, interactions between mechanical and electrical subsystems can significantly affect the loads and thus the individual lifetime of the components.

  16. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Basis Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  17. Quantitative interferometric microscopic flow cytometer with expanded principal component analysis method

    NASA Astrophysics Data System (ADS)

    Wang, Shouyu; Jin, Ying; Yan, Keding; Xue, Liang; Liu, Fei; Li, Zhenhua

    2014-11-01

    Quantitative interferometric microscopy is used in biological and medical fields, and a wealth of applications have been proposed for detecting different kinds of biological samples. Here, we develop a phase-detecting cytometer based on quantitative interferometric microscopy with an expanded principal component analysis phase retrieval method to obtain phase distributions of red blood cells with a spatial resolution of ~1.5 μm. Since the expanded principal component analysis method is a time-domain phase retrieval algorithm, it avoids the disadvantages of traditional frequency-domain algorithms. Additionally, the phase retrieval method realizes high-speed phase imaging from multiple microscopic interferograms captured by a CCD camera as the biological cells are scanned across the field of view. We believe this method can be a powerful tool for quantitatively measuring the phase distributions of different biological samples in biological and medical fields.

  18. Adjustment for population stratification via principal components in association analysis of rare variants.

    PubMed

    Zhang, Yiwei; Guan, Weihua; Pan, Wei

    2013-01-01

    For unrelated samples, principal component (PC) analysis has been established as a simple and effective approach to adjusting for population stratification in association analysis of common variants (CVs, with minor allele frequency (MAF) > 5%). However, it is less clear how it would perform in analysis of low-frequency variants (LFVs, MAF between 1% and 5%) or of rare variants (RVs, MAF < 1%). Furthermore, with next-generation sequencing data, it is unknown whether PCs should be constructed based on CVs, LFVs, or RVs. In this study, we used the 1000 Genomes Project sequence data to explore the construction of PCs and their use in association analysis of LFVs or RVs for unrelated samples. It is shown that a few top PCs based on either CVs or LFVs could separate two continental groups, European and African samples, but those based on only RVs performed less well. When applied to several association tests in simulated data with population stratification, using PCs based on either CVs or LFVs was effective in controlling Type I error rates, while nonadjustment led to inflated Type I error rates. Perhaps the most interesting observation is that, although the PCs based on LFVs could better separate the two continental groups than those based on CVs, the use of the former could lead to overadjustment in the sense of substantial power loss in the absence of population stratification; in contrast, we did not see any problem with the use of the PCs based on CVs in all our examples.
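    A small sketch of PC adjustment in an association test; genotypes and phenotypes are simulated, and the logistic model (via statsmodels) is an assumed stand-in for the study's battery of tests:

```python
# Sketch: build PCs from common variants, then include them as covariates
# when testing a rare variant, to absorb population stratification.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
G_common = rng.binomial(2, 0.3, size=(500, 1000))   # CV genotypes (0/1/2)
g_rare = rng.binomial(2, 0.01, size=500)            # one rare variant to test
y = rng.binomial(1, 0.5, size=500)                  # case/control status

pcs = PCA(n_components=4).fit_transform(G_common - G_common.mean(axis=0))

X = sm.add_constant(np.column_stack([g_rare, pcs])) # variant + PC covariates
fit = sm.Logit(y, X).fit(disp=0)
print(fit.pvalues[1])                               # PC-adjusted variant p-value
```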

  19. The analysis of mixtures: Application of principal component analysis to XAS spectra

    SciTech Connect

    Wasserman, S.R.

    1996-10-01

    Many samples which are subjected to XAS analysis contain the element of interest in more than one chemical form. The interpretation of the spectra from such samples is often not straightforward, particularly if appropriate model systems are not available. We have applied principal component analysis (PCA) to real and simulated systems which contain mixtures of several species for a given element. PCA has been extensively used for the analysis of other types of spectra, including MS, IR, and UV-VIS. The application of PCA to XAS is illustrated by examining the speciation of iron within coals. PCA can determine how many different species that contain a particular element are present in a series of spectra. In tandem with model compounds, principal component analysis can suggest which of the models may contribute to the observed XAS spectra.
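    The species-counting step can be illustrated with simulated mixtures (the spectra and noise level below are fabricated; real XAS data would replace them):

```python
# Sketch: the number of dominant singular values of the mixture-spectra matrix
# estimates how many distinct species contribute.
import numpy as np

rng = np.random.default_rng(4)
pure = rng.random(size=(2, 300))                      # two pure-species spectra
mix = rng.random(size=(8, 2))                         # mixing fractions per sample
spectra = mix @ pure + rng.normal(0, 0.01, (8, 300))  # 8 measured mixtures

s = np.linalg.svd(spectra, compute_uv=False)
print(s[:5].round(3))   # two values dominate; the rest sit at the noise floor
```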

  20. Overview of independent component analysis technique with an application to synthetic aperture radar (SAR) imagery processing.

    PubMed

    Fiori, Simone

    2003-01-01

    We present an overview of independent component analysis, an emerging signal processing technique based on neural networks, with the aim of providing an up-to-date survey of the theoretical streams in this discipline and of the current applications in the engineering area. We also focus on a particular application, dealing with a remote sensing technique based on synthetic aperture radar imagery processing: we briefly review the features and main applications of synthetic aperture radar and show how blind signal processing by neural networks may be advantageously employed to enhance the quality of remote sensing data.

  1. Asynchronous in situ connected-components analysis for complex fluid flows

    SciTech Connect

    McClure, James; Berrill, Mark A; Prins, Jan F; Miller, Cass

    2016-01-01

    Fluid flow in porous media is at the heart of problems such as groundwater contamination and carbon sequestration, and presents an important challenge for scientific computing. For this class of problem, large three-dimensional simulations are performed to advance scientific inquiry. On massively parallel computing systems, the volume of data generated by such approaches can become a productivity bottleneck if the raw data generated from the simulation is analyzed in a post-processing step. We present a physics-based framework for in situ data reduction that is theoretically grounded in multiscale averaging theory. We show how task parallelism can be exploited to concurrently perform a variety of analysis tasks with data-dependent costs, including the generation of iso-surfaces, morphological analyses, and connected components analysis. A task management framework is constructed to launch asynchronous analysis threads, manage dependencies between different tasks, promote data locality and hide the impact of data transfers. The framework is applied to analyze GPU-based simulations of two-fluid flow in porous media, generating a set of averaged measures that represents the overall system behavior. We demonstrate how the approach can be applied to perform physically-consistent averaging over fluid subregions using connected components analysis. Simulations performed on Oak Ridge National Lab's Titan supercomputer are profiled to demonstrate the performance of the associated multi-threaded in situ analysis for typical production simulations of two-fluid flow.

  2. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed prognostics for analog circuits, and the few existing methods do not connect feature extraction and calculation to circuit analysis, so the fault indicator (FI) is often computed without a sound rationale, which degrades prognostic performance. To address this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Because faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex-field model. Then, using a parameter scanning model defined over the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to derive a more reasonable FI feature set. From this feature set, it establishes a novel model of the degeneration trend of a circuit's single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of the components. Because the FI feature set is grounded in circuit analysis, prediction accuracy is improved to some extent. These conclusions are verified by experiments.
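    A generic bootstrap particle filter, sketched for a single drifting fault indicator; the degradation and measurement models are our assumptions and far simpler than the paper's circuit-derived FI set:

```python
# Sketch: track a slowly degrading component parameter from noisy FI readings;
# propagate particles through the state model, reweight by likelihood, and
# resample when the effective sample size collapses.
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps = 1000, 50
true_r = 1.0
particles = rng.normal(1.0, 0.1, n_particles)   # initial parameter guesses
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(n_steps):
    true_r *= 1.01                                           # slow degradation
    z = true_r + rng.normal(0, 0.05)                         # noisy FI measurement
    particles *= 1.01 + rng.normal(0, 0.005, n_particles)    # propagate state model
    weights *= np.exp(-0.5 * ((z - particles) / 0.05) ** 2)  # likelihood update
    weights /= weights.sum()
    if 1.0 / (weights ** 2).sum() < n_particles / 2:         # resample if degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print(round(true_r, 3), round(float((weights * particles).sum()), 3))
```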

  3. Computational models for the analysis/design of hypersonic scramjet components. I - Combustor and nozzle models

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.

    1986-01-01

    An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code, which is based upon the SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code, which is based upon the SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress toward future hybrid and 3D versions are discussed in this paper.

  4. Estimation of blood pressure variability using independent component analysis of photoplethysmographic signal.

    PubMed

    Abe, Makoto; Yoshizawa, Makoto; Sugita, Norihiro; Tanaka, Akira; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi

    2009-01-01

    The maximum cross-correlation coefficient rho(max) between blood pressure variability and heart rate variability, whose frequency components are limited to the Mayer wave-related band, is a useful index for evaluating the state of the autonomic nervous function related to baroreflex. However, calculating rho(max) requires continuous blood pressure measurement with an expensive and bulky device. The present study proposes an easier method for obtaining rho(max) from finger photoplethysmography (PPG). In the proposed method, independent components are extracted from feature variables specified by the PPG signal using independent component analysis (ICA), and the most appropriate component is then chosen so that the rho(max) based on that component fits its true value. The results from an experiment with a postural change performed in 17 healthy subjects suggest that the proposed method can estimate rho(max) by using ICA to extract blood pressure information from the PPG signal.
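    The index itself is compact in code; the signals, sampling rate, and Mayer-band edges (~0.04-0.15 Hz) below are assumed for illustration:

```python
# Sketch: band-limit two variability series to the Mayer-wave band, then take
# the maximum normalized cross-correlation over lags as rho(max).
import numpy as np
from scipy.signal import butter, filtfilt, correlate

fs = 4.0                                  # resampled variability series, Hz (assumed)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(6)
bp = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)          # BP variability
hr = np.sin(2 * np.pi * 0.1 * (t - 2.0)) + rng.normal(0, 0.5, t.size)  # delayed HR variability

b, a = butter(3, [0.04 / (fs / 2), 0.15 / (fs / 2)], btype="band")
bp_f, hr_f = filtfilt(b, a, bp), filtfilt(b, a, hr)

xc = correlate(bp_f - bp_f.mean(), hr_f - hr_f.mean(), mode="full")
rho_max = xc.max() / (np.std(bp_f) * np.std(hr_f) * bp_f.size)   # approx. coefficient
print(round(float(rho_max), 2))
```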

  5. A multi-dimensional functional principal components analysis of EEG data.

    PubMed

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-01-10

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations.

  6. Short-wave near-infrared spectroscopy of milk powder for brand identification and component analysis.

    PubMed

    Wu, D; Feng, S; He, Y

    2008-03-01

    The aim of the present paper was to provide new insight into the short-wave near-infrared (NIR) spectroscopic analysis of milk powder. Near-infrared spectra in the 800- to 1,025-nm region of 350 samples were analyzed to determine the brands and quality of milk powders. Brand identification was done by a least squares support vector machine (LS-SVM) model coupled with fast fixed-point independent component analysis (ICA). The correct answer rate of the ICA-LS-SVM model reached as high as 98%, which was better than that of the LS-SVM (95%). Contents of fat, protein, and carbohydrate were determined by the LS-SVM and ICA-LS-SVM models. Both processes offered good determination performance for analyzing the main components in milk powder based on short-wave NIR spectra. The coefficient of determination for prediction and the root mean square error of prediction of ICA-LS-SVM were 0.983 and 0.231, 0.982 and 0.161, and 0.980 and 0.410, respectively, for the 3 components. However, there were fewer than 10 input variables in the ICA-LS-SVM model compared with 225 in the LS-SVM model. Thus, the processing time was much shorter and the model was simpler. The results presented in this paper demonstrate that the short-wave NIR region is promising for fast and reliable determination of the brand and main components in milk powder.

  7. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.

  8. A Content Analysis of Preconception Health Education Materials: Characteristics, Strategies, and Clinical-Behavioral Components

    PubMed Central

    Levis, Denise M.; Westbrook, Kyresa

    2015-01-01

    Purpose Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. Design We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Setting Not applicable. Participants Not applicable. Method Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of materials’ characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). Results The self-assessment of PCH behaviors was the most common material (28%) to appear in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). “Practicing PCH benefits the baby’s health” was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Conclusion Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed around PCH components, in terms of packaging and increasing motivation, which could guide use and placement of clinical-behavioral components within promotional materials. PMID:23286661

  9. Differentially Variable Component Analysis (dVCA): Identifying Multiple Evoked Components using Trial-to-Trial Variability

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.

    2003-01-01

    Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult, as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the differentially Variable Component Analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations, we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1, in an awake, behaving macaque monkey.
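    A toy simulation of the premise dVCA exploits, not the algorithm itself: trials share one component waveform but vary in amplitude and latency, so plain trial averaging flattens the peak:

```python
# Sketch: simulate single trials with lognormal amplitude and jittered latency;
# the trial average underestimates the true component amplitude.
import numpy as np

rng = np.random.default_rng(11)
t = np.arange(300)
component = np.exp(-0.5 * ((t - 150) / 10.0) ** 2)     # evoked waveform

trials = np.stack([rng.lognormal(0, 0.3) * np.roll(component, rng.integers(-30, 31))
                   + rng.normal(0, 0.3, t.size) for _ in range(100)])

average = trials.mean(axis=0)
print(round(float(component.max()), 2), round(float(average.max()), 2))
# latency jitter flattens the average peak relative to the true component
```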

  10. A four-component organogel based on orthogonal chemical interactions.

    PubMed

    Luisier, Nicolas; Schenk, Kurt; Severin, Kay

    2014-09-14

    A thermoresponsive organogel was obtained by orthogonal assembly of four compounds using dynamic covalent boronate ester and imine bonds, as well as dative boron-nitrogen bonds. It is shown that the gel state can be disrupted or reinforced by chemicals which undergo exchange reactions with the gel components.

  11. Robust analysis of event-related functional magnetic resonance imaging data using independent component analysis

    NASA Astrophysics Data System (ADS)

    Kadah, Yasser M.

    2002-04-01

    We propose a technique that enables robust use of blind source separation techniques in fMRI data analysis. The fMRI temporal signal is modeled as the summation of the true activation signal, a physiological baseline fluctuation component, and a random noise component. A preprocessing denoising is used to reduce the dimensionality of the random noise component in this mixture before applying the principal/independent component analysis (PCA/ICA) methods. The set of denoised time courses from a localized region are utilized to capture the region-specific activation patterns. We show a significant improvement in the convergence properties of the ICA iteration when the denoised time courses are used. We also demonstrate the advantage of using ICA over PCA to separate components due to physiological signals from those corresponding to actual activation. Moreover, we propose the use of ICA to analyze the magnitude of the Fourier domain of the time courses. This allows ICA to group signals with similar patterns and different delays together, which makes the iteration even more efficient. The proposed technique is verified using computer simulations as well as actual data from a healthy human volunteer. The results confirm the robustness of the new strategy and demonstrate its value for clinical use.
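    The Fourier-magnitude idea can be demonstrated on synthetic time courses; FastICA here is scikit-learn's implementation, and all data are fabricated:

```python
# Sketch: a time shift changes only Fourier phase, so ICA applied to spectral
# magnitudes groups identically shaped but delayed responses together.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.arange(200)
template = np.exp(-0.5 * ((t % 50 - 20) / 4.0) ** 2)   # periodic "activation"

voxels = np.stack([np.roll(template, d) + rng.normal(0, 0.2, t.size)
                   for d in (0, 5, 10, 15)])           # same signal, varied delays

mags = np.abs(np.fft.rfft(voxels, axis=1))             # delay-insensitive representation
ica = FastICA(n_components=2, random_state=0)
ica.fit(mags.T)                                        # voxels are the mixtures
print(ica.mixing_.round(2))   # all four voxels load on a common component
```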

  12. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  13. Technological Alternatives to Paper-Based Components of Team-Based Learning

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Walker, Joshua D.

    2008-01-01

    The authors have been using components of team-based learning (TBL) in two undergraduate courses at the University of Texas for several years: an educational psychology survey course--Cognition, Human Learning and Motivation--and Introduction to Statistics. In this chapter, they describe how they used technology in classes of fifty to seventy…

  14. What's hampering measurement invariance: detecting non-invariant items using clusterwise simultaneous component analysis

    PubMed Central

    De Roover, Kim; Timmerman, Marieke E.; De Leersnyder, Jozefien; Mesquita, Batja; Ceulemans, Eva

    2014-01-01

    The issue of measurement invariance is ubiquitous in the behavioral sciences nowadays as more and more studies yield multivariate multigroup data. When measurement invariance cannot be established across groups, this is often due to different loadings on only a few items. Within the multigroup CFA framework, methods have been proposed to trace such non-invariant items, but these methods have some disadvantages in that they require researchers to run a multitude of analyses and in that they imply assumptions that are often questionable. In this paper, we propose an alternative strategy which builds on clusterwise simultaneous component analysis (SCA). Clusterwise SCA, being an exploratory technique, assigns the groups under study to a few clusters based on differences and similarities in the component structure of the items, and thus based on the covariance matrices. Non-invariant items can then be traced by comparing the cluster-specific component loadings via congruence coefficients, which is far more parsimonious than comparing the component structure of all separate groups. In this paper we present a heuristic for this procedure. Afterwards, one can return to the multigroup CFA framework and check whether removing the non-invariant items or removing some of the equality restrictions for these items, yields satisfactory invariance test results. An empirical application concerning cross-cultural emotion data is used to demonstrate that this novel approach is useful and can co-exist with the traditional CFA approaches. PMID:24999335
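    The comparison step uses Tucker's congruence coefficient, which is a one-liner; the loading vectors below are toy values:

```python
# phi = x.y / (||x|| ||y||); values near 1 indicate an invariant component.
import numpy as np

def congruence(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

load_cluster1 = np.array([0.8, 0.7, 0.6, 0.1])
load_cluster2 = np.array([0.7, 0.8, 0.5, -0.6])   # last item behaves differently

print(round(congruence(load_cluster1, load_cluster2), 2))  # ~0.84: flags non-invariance
```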

  15. GPR detection of buried symmetrically shaped minelike objects using selective independent component analysis

    NASA Astrophysics Data System (ADS)

    Karlsen, Brian; Sorensen, Helge B. D.; Larsen, Jan; Jakobsen, Kaj B.

    2003-09-01

    This paper addresses the detection of mine-like objects in stepped-frequency ground penetrating radar (SF-GPR) data as a function of object size, object content, and burial depth. The detection approach is based on a Selective Independent Component Analysis (SICA). SICA provides an automatic ranking of components, which enables the suppression of clutter, hence extraction of components carrying mine information. The goal of the investigation is to evaluate various time and frequency domain ICA approaches based on SICA. The performance comparison is based on a series of mine-like objects ranging from small-scale anti-personnel (AP) mines to large-scale anti-tank (AT) mines. Large-scale SF-GPR measurements on this series of mine-like objects buried in soil were performed. The SF-GPR data was acquired using a wideband monostatic bow-tie antenna operating in the frequency range 750 MHz - 3.0 GHz. The detection and clutter reduction approaches based on SICA are successfully evaluated on this SF-GPR dataset.

  16. Magnetic unmixing of first-order reversal curve diagrams using principal component analysis

    NASA Astrophysics Data System (ADS)

    Lascu, Ioan; Harrison, Richard J.; Li, Yuting; Muraszko, Joy R.; Channell, James E. T.; Piotrowski, Alexander M.; Hodell, David A.

    2015-09-01

    We describe a quantitative magnetic unmixing method based on principal component analysis (PCA) of first-order reversal curve (FORC) diagrams. For PCA, we resample FORC distributions on grids that capture diagnostic signatures of single-domain (SD), pseudosingle-domain (PSD), and multidomain (MD) magnetite, as well as of minerals such as hematite. Individual FORC diagrams are recast as linear combinations of end-member (EM) FORC diagrams, located at user-defined positions in PCA space. The EM selection is guided by constraints derived from physical modeling and imposed by data scatter. We investigate temporal variations of two EMs in bulk North Atlantic sediment cores collected from the Rockall Trough and the Iberian Continental Margin. Sediments from each site contain a mixture of magnetosomes and granulometrically distinct detrital magnetite. We also quantify the spatial variation of three EM components (a coarse silt-sized MD component, a fine silt-sized PSD component, and a mixed clay-sized component containing both SD magnetite and hematite) in surficial sediments along the flow path of the North Atlantic Deep Water (NADW). These samples were separated into granulometric fractions, which helped constrain EM definition. PCA-based unmixing reveals systematic variations in EM relative abundance as a function of distance along NADW flow. Finally, we apply PCA to the combined data set of Rockall Trough and NADW sediments, which can be recast as a four-EM mixture, providing enhanced discrimination between components. Our method forms the foundation of a general solution to the problem of unmixing multicomponent magnetic mixtures, a fundamental task of rock magnetic studies.

  17. [Infrared spectroscopy analysis of SF6 using multiscale weighted principal component analysis].

    PubMed

    Peng, Xi; Wang, Xian-Pei; Huang, Yun-Guang

    2012-06-01

    Infrared spectroscopy analysis of SF6 and its derivatives is an important method for operating state assessment and fault diagnosis of gas insulated switchgear (GIS). Traditional methods are complicated and inefficient, and the results can vary from one analyst to another. In the present work, feature extraction methods from machine learning are recommended for such diagnosis problems, and a multiscale weighted principal component analysis method is proposed. The proposed method combines the advantages of standard principal component analysis and multiscale decomposition to maximize the feature information at different scales, and modifies the importance of the eigenvectors in classification. The classification performance of the proposed method was demonstrated to be 3 to 4 times better than that of standard PCA for the infrared spectra of SF6 and its derivatives provided by the Guangxi Research Institute of Electric Power.
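    A rough sketch of the idea, with a one-level Haar split standing in for the multiscale decomposition and a Fisher-style separation ratio as our assumed weighting; the paper's actual decomposition and weights may differ:

```python
# Sketch: decompose spectra into coarse/detail scales, run PCA per scale, and
# weight each scale's components by how well they separate the two classes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
spectra = rng.random(size=(40, 256))       # 40 IR spectra (synthetic)
labels = np.repeat([0, 1], 20)             # e.g., normal SF6 vs. fault products
spectra[labels == 1, :64] += 0.1           # class difference at coarse scale

coarse = (spectra[:, ::2] + spectra[:, 1::2]) / 2   # Haar approximation
detail = (spectra[:, ::2] - spectra[:, 1::2]) / 2   # Haar detail

features = []
for band in (coarse, detail):
    scores = PCA(n_components=3).fit_transform(band)
    w = np.array([abs(scores[labels == 0, k].mean() - scores[labels == 1, k].mean())
                  / (scores[:, k].std() + 1e-12) for k in range(3)])
    features.append(scores * w)            # emphasize discriminative directions

X = np.hstack(features)                    # weighted multiscale feature set
print(X.shape)
```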

  18. [Assessment of landscape ecological security and optimization of landscape pattern based on spatial principal component analysis and resistance model in arid inland area: A case study of Ganzhou District, Zhangye City, Northwest China].

    PubMed

    Pan, Jing-hu; Liu, Xiao

    2015-10-01

    Starting from the ecological environment of an arid-area inland river, the distribution of the ecological security pattern of Ganzhou District was obtained by using landscape ecology theory, spatial principal component analysis (SPCA), and GIS techniques. Ten factors, such as altitude, slope, soil erosion, vegetation coverage, and distance from roads, were selected as the constraint conditions. According to the minimum cumulative resistance (MCR) model of the landscape, ecological corridors and nodes were established to optimize the structure and function of the ecological network. The results showed that the comprehensive ecological security of the research area was average. The area at a moderate level of security was 1318.7 km2, the largest share, accounting for 36.7% of the research area. The area at a low level of security was mainly located in the northern part and accounted for 19.9% of the study area. With points, lines, and surfaces interlaced, a regional ecological network was constructed, consisting of six ecological corridors, 14 ecological nodes, one large ecological source region, and several small source regions, which could effectively improve the ecological security level of the study area.
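    For reference, the MCR model cited here is usually written as follows (our rendering of the commonly cited formula, not taken from this abstract):

```latex
\mathrm{MCR} = f_{\min} \sum_{j=n}^{i=m} D_{ij} \times R_i
```

    where D_ij is the distance traversed across landscape unit i from source j, R_i is the resistance of unit i to movement, and f_min takes the minimum cumulative value over all paths.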

  19. [Experimental Conditions and Reliability Analysis of Results of COD components].

    PubMed

    Li, Zhi-hua; Zhang, Yin; Han, Xing; Yu, Ke; Li, Ru-jia

    2015-10-01

    The present study attempts to use SF (OUR(max)/OUR(en)) instead of S(0)/X(0) as an index of optimal initial conditions for determination of COD components by means of respirometry, thereby simplifying the measuring process so that the operation can be automated. Further, the ratio of COD consumed by the growth of biomass can be used for reliability assessment of the results. Experimental results show that the conditions for obtaining good results are as follows: (1) for samples composed of a large amount of easily biodegradable components (e.g., synthetic wastewater made with sodium acetate), SF should be in the range of 2.8 to 5.3, and the ratio of COD consumed by growth of biomass should be less than 30%; (2) for samples composed of both readily biodegradable and slowly biodegradable components (i.e., typical domestic wastewater), SF should be in the range of 5.8 to 6.4, and the ratio of COD consumed by growth of biomass should be less than 30%; (3) and for samples composed of a large amount of slowly biodegradable industrial wastewater (i.e., landfill leachate), SF should be 15 or less, and the ratio of COD consumed by growth of biomass should be approximately 40%. Therefore, when respirometry is used for the determination of COD components, the optimal conditions in terms of SF increase with the complexity of the carbon source.

  20. A Component Analysis of Schedule Thinning during Functional Communication Training

    ERIC Educational Resources Information Center

    Betz, Alison M.; Fisher, Wayne W.; Roane, Henry S.; Mintz, Joslyn C.; Owen, Todd M.

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can…

  1. Respiratory dose analysis for components of ambient particulate matter

    EPA Science Inventory

    Particulate matter (PM) in the atmosphere is a complex mixture of particles with different sizes and chemical compositions. Although PM is known to induce health effects, specific attributes of PM that may cause health effects are somewhat ambiguous. Dose of each specific compone...

  2. A Critical Analysis of Football Bowl Subdivision Coaching Contract Components

    ERIC Educational Resources Information Center

    Nichols, Justin Keith

    2012-01-01

    This exploratory study is designed to inventory and analyze contract components used by Football Bowl Subdivision (FBS) institutions in the National Collegiate Athletic Association (NCAA) to further contribute to the body of research. The FBS is comprised of 120 institutions, and 94 of those institutions submitted contracts to "USA Today"…

  3. An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Fava, Joseph L.

    1987-01-01

    Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…

  4. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because it gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract density information for each skin chromophore independently. The isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003; however, that work did not provide a quantification method for melanin pigmentation. This paper introduces a quantification method based on ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. As a result, the method worked well for skin images showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to subjective rating values for the appearance of pigmentation. Further analysis is needed to characterize the appearance of pigmentation in terms of the size of the pigmented area and its spatial gradation.
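    The LoG extraction step in isolation, on a synthetic melanin-density map standing in for the ICA output; the blob, filter scale, and threshold are assumptions:

```python
# Sketch: a Laplacian-of-Gaussian filter responds strongly (negatively) at
# bright blob-like spots; thresholding the response segments the pigmented area.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(9)
x = np.arange(128)
xx, yy = np.meshgrid(x, x)
blob = 0.3 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 8.0 ** 2))
melanin = blob + rng.normal(0, 0.02, (128, 128))   # density map with one spot

log_img = gaussian_laplace(melanin, sigma=8)
pigmented = log_img < 0.3 * log_img.min()          # relative threshold
print(pigmented.sum())                             # pigmented area in pixels
```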

  5. A Component-based Programming Model for Composite, Distributed Applications

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  6. Laser-based rapid prototyping of plasmonic components

    NASA Astrophysics Data System (ADS)

    Reinhardt, Carsten; Passinger, Sven; Kiyan, Roman; Stepanov, Andrey L.; Chichkov, Boris N.

    2006-08-01

    The science of surface plasmon polaritons (SPPs) has attracted much attention in recent years. In this contribution, we study applications of two-photon absorption of femtosecond laser radiation for the fabrication of dielectric and metallic SPP structures, which can be used for localization, guiding, and manipulation of SPPs. Dielectric SPP components, e.g. waveguides, bends, and splitters, are fabricated on gold films. SPP properties are investigated by scanning near-field optical microscopy (SNOM), indicating guiding and reflection of SPPs by polymer lines. SPP excitation on dielectric line and point structures is observed by far-field microscopy. Results on plasmon focusing and on the fabrication and characterization of metallic SPP structures and components on dielectric substrates will be presented and discussed.

  7. Effect of Principal Component Analysis Centering and Scaling on Classification of Mycobacteria from Raman Spectra.

    PubMed

    Hanson, Cynthia; Sieverts, Michael; Vargis, Elizabeth

    2016-11-25

    Raman spectroscopy has been used for decades to detect and identify biological substances as it provides specific molecular information. Spectra collected from biological samples are often complex, requiring the aid of data truncation techniques such as principal component analysis (PCA) and multivariate classification methods. Classification results depend on the proper selection of principal components (PCs) and how PCA is performed (scaling and/or centering). There are also guidelines for choosing the optimal number of PCs such as a scree plot, Kaiser criterion, or cumulative percent variance. The goal of this research is to evaluate these methods for best implementation of PCA and PC selection to classify Raman spectra of bacteria. Raman spectra of three different isolates of mycobacteria (Mycobacterium sp. JLS, Mycobacterium sp. KMS, Mycobacterium sp. MCS) were collected and then passed through PCA and linear discriminant analysis for classification. Principal component analysis implementation as well as PC selection was evaluated by comparing the highest possible classification accuracies against accuracies determined by PC selection methods for each centering and scaling option. Centered and unscaled data provided the best results when selecting PCs based on cumulative percent variance.
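    A compact version of the comparison, with simulated three-class spectra and an explicit SVD so that centering and scaling remain genuine choices (scikit-learn's PCA, by contrast, always centers internally); the data and the five-PC cutoff are assumptions:

```python
# Sketch: for each centering/scaling option, compute PCA scores via SVD,
# classify with LDA, and compare cross-validated accuracies.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def pca_scores(Z, k):
    """Scores from an SVD, so centering/scaling stay explicit choices."""
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k] * S[:k]

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(mu, 1.0, size=(30, 200)) for mu in (0.0, 0.3, 0.6)])
y = np.repeat([0, 1, 2], 30)

for center in (True, False):
    for scale in (True, False):
        Z = X - X.mean(axis=0) if center else X
        Z = Z / X.std(axis=0) if scale else Z
        acc = cross_val_score(LinearDiscriminantAnalysis(), pca_scores(Z, 5), y, cv=5).mean()
        print(f"centered={center} scaled={scale} accuracy={acc:.2f}")
```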

  8. Automatic Classification of Staphylococci by Principal-Component Analysis and a Gradient Method1

    PubMed Central

    Hill, L. R.; Silvestri, L. G.; Ihm, P.; Farchi, G.; Lanciani, P.

    1965-01-01

    Hill, L. R. (Università Statale, Milano, Italy), L. G. Silvestri, P. Ihm, G. Farchi, and P. Lanciani. Automatic classification of staphylococci by principal-component analysis and a gradient method. J. Bacteriol. 89:1393–1401. 1965.—Forty-nine strains from the species Staphylococcus aureus, S. saprophyticus, S. lactis, S. afermentans, and S. roseus were submitted to different taxometric analyses; clustering was performed by single linkage, by the unweighted pair group method, and by principal-component analysis followed by a gradient method. Results were substantially the same with all methods. All S. aureus clustered together, sharply separated from S. roseus and S. afermentans; S. lactis and S. saprophyticus fell between, with the latter nearer to S. aureus. The main purpose of this study was to introduce a new taxometric technique, based on principal-component analysis followed by a gradient method, and to compare it with some other methods in current use. Advantages of the new method are complete automation and therefore greater objectivity, execution of the clustering in a space of reduced dimensions in which different characters have different weights, easy recognition of taxonomically important characters, and opportunity for representing clusters in three-dimensional models; the principal disadvantage is the need for large computer facilities. PMID:14293013

  9. Condition Based Monitoring of Gas Turbine Combustion Components

    SciTech Connect

    Ulerich, Nancy; Kidane, Getnet; Spiegelberg, Christine; Tevs, Nikolai

    2012-09-30

    The objective of this program is to develop sensors that allow condition based monitoring of critical combustion parts of gas turbines. Siemens teamed with innovative, small companies that were developing sensor concepts that could monitor wearing and cracking of hot turbine parts. A magnetic crack monitoring sensor concept developed by JENTEK Sensors, Inc. was evaluated in laboratory tests. Designs for engine application were evaluated. The inability to develop a robust lead wire to transmit the signal long distances resulted in a discontinuation of this concept. An optical wear sensor concept proposed by K Sciences GP, LLC was tested in proof-of-concept testing. The sensor concept depended, however, on optical fiber tips wearing with the loaded part. The fiber tip wear resulted in too much optical input variability; the sensor could not provide adequate stability for measurement. Siemens developed an alternative optical wear sensor approach that used a commercial PHILTEC, Inc. optical gap sensor with an optical spacer to remove the fibers from the wearing surface. The gap sensor measured the length of the wearing spacer to follow loaded part wear. This optical wear sensor was developed to a Technology Readiness Level (TRL) of 5. It was validated in lab tests and installed on a floating transition seal in an F-Class gas turbine. Laboratory tests indicate that the concept can measure wear on loaded parts at temperatures up to 800 °C with uncertainty of < 0.3 mm. Testing in an F-Class engine installation showed that the optical spacer wore with the wearing part. The electro-optics box located outside the engine enclosure survived the engine enclosure environment. The fiber optic cable and the optical spacer, however, both degraded after about 100 operating hours, impacting the signal analysis.

  10. Scalable Advanced Network Services Based on Coordinated Active Components

    DTIC Science & Technology

    2004-02-01

    … as a means of customizing both high functionality and scalable communication components to meet the needs of specific services. … considering both the service quality for the user and the efficient use of the infrastructure (cost). (4) Finally, the synthesizer needs to configure the …

  11. Detecting Gen