Science.gov

Sample records for component analysis based

  1. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA is a promising technique for extracting CO-like components without prior assumptions about their distributions and frequency dependences.
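
    The kurtosis-ranked deflation step can be illustrated in a few lines. The following is a minimal sketch (not the authors' pipeline), assuming scikit-learn and SciPy; the mixing matrix and mock source distributions are invented stand-ins for the sky maps.

        # Separate mock "sky map" mixtures with FastICA and rank the
        # recovered components by |kurtosis|; the most non-Gaussian one
        # plays the role of the CO-like component.
        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_pix = 10000
        cmb = rng.normal(size=n_pix)                  # near-Gaussian component
        dust = rng.lognormal(sigma=1.0, size=n_pix)   # skewed foreground
        co = rng.exponential(size=n_pix) ** 2         # strongly non-Gaussian line

        # Mix the sources into three mock frequency channels (100/143/217 GHz)
        A = np.array([[1.0, 0.6, 0.3],
                      [1.0, 0.9, 0.1],
                      [1.0, 1.5, 0.8]])
        X = A @ np.stack([cmb, dust, co])             # (3 channels, n_pix)

        ica = FastICA(n_components=3, algorithm='deflation', random_state=0)
        S = ica.fit_transform(X.T).T                  # recovered components

        order = np.argsort(-np.abs(kurtosis(S, axis=1)))
        print("components ranked by |kurtosis|:", order)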

  2. Enhancement of textural differences based on morphological component analysis.

    PubMed

    Chi, Jianning; Eramian, Mark

    2015-09-01

    This paper proposes a new texture enhancement method which uses an image decomposition that allows different visual characteristics of textures to be represented by separate components, in contrast with previous methods, which either enhance texture indirectly or represent all texture information using a single image component. Our method is intended as a preprocessing step prior to the use of texture-based image segmentation algorithms. It uses a modification of morphological component analysis (MCA) which allows texture to be separated into multiple morphological components, each representing a different visual characteristic of texture. We select four such texture characteristics and propose new dictionaries to extract these components using MCA. We then propose procedures for modifying each texture component and recombining them to produce a texture-enhanced image. We applied our method as a preprocessing step prior to a number of texture-based segmentation methods and compared the accuracy of the results, finding that our method produced results superior to those of the comparator methods for all segmentation algorithms tested. We also demonstrate by example the main mechanism by which our method produces superior results: it causes the clusters of local texture features of each distinct image texture to diverge from one another within the multidimensional feature space to a far greater degree than the comparator enhancement methods do. PMID:25935032

  3. Random phase-shifting interferometry based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Lu, Xiaoxu; Tian, Jindong; Shou, Junwei; Zheng, Dejin; Zhong, Liyun

    2016-07-01

    A novel phase retrieval algorithm for random phase-shifting interferometry is proposed, based on independent component analysis (ICA). By recombining pixel positions, a sequence of phase-shifting interferograms with random phase shifts is decomposed into a group of mutually independent components, and the background and the measured phase of the interferogram can then be obtained with a simple arctangent operation. Compared with the conventional advanced iterative algorithm (AIA), which has high accuracy, both simulation and experimental results demonstrate that the proposed ICA algorithm achieves high accuracy, rapid convergence, and good noise tolerance in random phase-shifting interferometry.

  4. Filterbank-based independent component analysis for acoustic mixtures

    NASA Astrophysics Data System (ADS)

    Park, Hyung-Min

    2011-06-01

    Independent component analysis (ICA) for acoustic mixtures has been a challenging problem due to the very complex reverberation involved in real-world mixing environments. In an effort to overcome the disadvantages of the conventional time-domain and frequency-domain approaches, this paper describes filterbank-based independent component analysis for acoustic mixtures. In this approach, input signals are split into subband signals and decimated. A simplified network performs ICA on the decimated signals, and finally the independent components are synthesized. First, a uniform filterbank is employed for basic and simple derivation and implementation. The uniform-filterbank-based approach achieves better separation performance than the frequency-domain approach and gives faster convergence with less computational complexity than the time-domain approach. Since most natural signals have energy that decreases exponentially or more steeply as frequency increases, their spectral characteristics motivate a Bark-scale filterbank, which divides the low-frequency region finely and the high-frequency region coarsely. The Bark-scale-filterbank-based approach shows faster convergence than the uniform-filterbank-based one because it has more whitened inputs in the low-frequency subbands. It also improves separation performance because it has enough data to train the adaptive parameters accurately in the high-frequency subbands.

  5. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
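
    The low-rank-plus-sparse decomposition at the heart of this method can be sketched with the standard principal component pursuit iteration. The following is a minimal illustration, not the authors' implementation; the matrix sizes, the lambda and mu defaults, and the stopping rule are common textbook choices.

        # Robust PCA via principal component pursuit, solved with a
        # simple augmented-Lagrangian iteration: D = L (low rank) + S (sparse).
        import numpy as np

        def soft_threshold(X, tau):
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def rpca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
            """Decompose D into a low-rank matrix L and a sparse error matrix S."""
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / np.abs(D).sum()
            L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
            for _ in range(n_iter):
                # Singular-value thresholding step for the low-rank part
                U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
                # Shrinkage step for the sparse part
                S = soft_threshold(D - L + Y / mu, lam / mu)
                Y += mu * (D - L - S)
                if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
                    break
            return L, S

        # Each column of D would hold one vectorized training iris image;
        # L then supplies the features, S absorbs occlusions and reflections.
        D = np.random.rand(1024, 40)
        L, S = rpca(D)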

  6. Biological agent detection based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Mudigonda, Naga R.; Kacelenga, Ray

    2006-05-01

    This paper presents an algorithm, based on principal component analysis, for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical method for estimating background biological activity so as to make the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle count distribution and uses this information to suppress false alarms. The performance of the method was evaluated using a total of 68 tests, including 51 releases of Bacillus globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 to 65 Agent-Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and achieved an overall probability of detection of 94% for BG without generating any false alarms for obscurants at a detection threshold of 0.6 on a scale of 0 to 1. The method also successfully detected BG in the presence of diesel smoke and salt water fumes. The system responded to all five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.
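
    As a rough illustration of the general idea (not the 4WARN algorithm itself), a PCA model of background particle-count distributions can flag measurements that depart from the background subspace; all data, sizes, and thresholds below are invented.

        # Model background fluorescent-particle count histograms with PCA
        # and score new measurements by their normalized reconstruction error.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        background = rng.poisson(lam=20, size=(500, 16)).astype(float)
        pca = PCA(n_components=3).fit(background)

        def alarm_score(x, pca):
            """Residual after projecting onto the background subspace."""
            r = x - pca.inverse_transform(pca.transform(x[None, :]))[0]
            return np.linalg.norm(r) / np.linalg.norm(x)

        sample = background[0] + np.array([0] * 12 + [60] * 4)  # simulated release
        print(alarm_score(background[0], pca), alarm_score(sample, pca))
        # A threshold on this score (e.g. 0.6 on a normalized scale)
        # would raise the alarm for the simulated release only.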

  7. Life Assessment of Steam Turbine Components Based on Viscoplastic Analysis

    NASA Astrophysics Data System (ADS)

    Choi, Woo-Sung; Fleury, Eric; Kim, Bum-Shin; Hyun, Jung-Seob

    Unsteady thermal and mechanical loading in turbine components is caused by the transient regimes arising during start-ups and shut-downs and by changes in the operating regime of steam power plants; this results in nonuniform strain and stress distributions. Thus, accurate knowledge of the stresses caused by various loading conditions is required to ensure the integrity of turbine components and to enable an accurate life assessment. Although the materials of steam turbine components deform inelastically at high temperature, currently only elastic calculations are performed, for safety and simplicity. Numerous models have been proposed to describe viscoplastic (time-dependent) behavior; these models are rather elaborate, and it is difficult to incorporate them into a finite element code in order to simulate the loading of complex structures. In this paper, the total lifetime of steam turbine components was calculated by combining a viscoplastic constitutive equation with the ABAQUS finite element code. The viscoplastic analysis focused mainly on simplified constitutive equations with linear kinematic hardening, which are simple enough to be used effectively in computer simulation. The von Mises stress distribution of an HIP turbine rotor was calculated during its cold start-up operation, and a reasonable number of cycles was obtained from Langer's equation.

  8. Principal component analysis based methodology to distinguish protein SERS spectra

    NASA Astrophysics Data System (ADS)

    Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.

    2011-05-01

    Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility in the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin, and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. The paper clearly shows that the combined use of SERS measurements and PCA is effective in categorizing proteins on the basis of secondary structure.

  9. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images, and it resembles factor analysis in the sense that it extracts the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
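
    A minimal sketch of the combination described above, assuming the PyWavelets and scikit-learn packages; the image sizes, wavelet choice, and number of components are illustrative.

        # Reduce each face image to its wavelet approximation band, then
        # apply PCA to obtain "eigenface"-style features on the smaller data.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        def wavelet_features(img, level=2, wavelet='haar'):
            """Keep only the low-frequency approximation after `level` DWT steps."""
            a = img
            for _ in range(level):
                a, _ = pywt.dwt2(a, wavelet)
            return a.ravel()

        faces = np.random.rand(100, 64, 64)        # stand-in for a face database
        X = np.array([wavelet_features(f) for f in faces])

        pca = PCA(n_components=20).fit(X)          # eigenvectors of reduced data
        features = pca.transform(X)                # per-face feature vectors
        # A nearest-neighbour match on `features` completes the recognizer;
        # two DWT levels shrink each image 16x, cutting the eigenvector cost.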

  10. Experimental analysis of Model-Based Roentgen Stereophotogrammetric Analysis (MBRSA) on four typical prosthesis components.

    PubMed

    Seehaus, Frank; Emmerich, Judith; Kaptein, Bart L; Windhagen, Henning; Hurschler, Christof

    2009-04-01

    Classical marker-based roentgen stereophotogrammetric analysis (RSA) is an accurate method of measuring in vivo implant migration. A disadvantage of the method is the necessity of placing tantalum markers on the implant, which entails additional manufacturing and certification effort. Model-based RSA (MBRSA) is a method in which pose estimation of geometric surface models of the implant is used to detect implant migration; the placement of prosthesis markers is thus no longer necessary. The accuracy of the pose-estimation algorithms depends on the geometry of the prosthesis as well as on the accuracy of the surface models used. The goal of this study was thus to evaluate the experimental accuracy and precision of the MBRSA method for four different but typical prosthesis geometries that are commonly implanted. Is there a relationship between the accuracy of MBRSA and prosthesis geometry? Four prosthesis geometries were investigated: one femoral and one tibial total knee arthroplasty (TKA) component and two different femoral stem total hip arthroplasty (THA) components. An experimental phantom model was used to simulate two different implant migration protocols, whereby either the implant was moved relative to the surrounding bone (relative prosthesis-bone motion, RM) or, similar to the double-repeated measures performed to assess accuracy clinically, both the prosthesis and the surrounding bone model were moved (zero relative prosthesis-bone motion, ZRM). Motions were performed about three translational and three rotational axes, respectively. The maximum 95% confidence interval (CI) for MBRSA across all four prostheses investigated was better than -0.034 to 0.107 mm for in-plane and -0.217 to 0.069 mm for out-of-plane translation, and from -0.038 deg to 0.162 deg for in-plane and from -1.316 deg to 0.071 deg for out-of-plane rotation, with no clear differences between the ZRM and RM protocols observed. Accuracy in translation was similar

  11. Gabor feature-based apple quality inspection using kernel principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Automated inspection of apple quality involves computer recognition of good apples and blemished apples based on geometric or statistical features derived from apple images. This paper introduces a Gabor feature-based kernel principal component analysis (PCA) method; by combining Gabor wavelet rep...

  12. Robust Adaptive Principal Component Analysis Based on Intergraph Matrix for Medical Image Registration

    PubMed Central

    Xiao, Jinjun; Li, Min; Zhang, Haipeng

    2015-01-01

    This paper proposes a novel robust adaptive principal component analysis (RAPCA) method based on an intergraph matrix for image registration, in order to improve robustness and real-time performance. The contributions can be divided into three parts. First, a novel RAPCA method is developed to capture the common structure patterns based on the intergraph matrix of the objects. Second, a robust similarity measure is proposed based on the adaptive principal components. Finally, the robust registration algorithm is derived based on RAPCA. The experimental results show that the proposed method is very effective in capturing the common structure patterns for image registration in real-world images. PMID:25960739

  13. Reduction of a collisional-radiative mechanism for argon plasma based on principal component analysis

    SciTech Connect

    Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.

    2015-06-15

    This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic-specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set onto a reduced basis made up of its principal components. Thus, the flow governing equations are solved only for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the use of the MG-PCA technique enables a substantial reduction of the computational time.

  14. Impact factor analysis of mixture spectra unmixing based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Shengbo; Guo, Xulin; Zhou, Chao

    2016-01-01

    Based on the spectral independence of different materials, independent component analysis (ICA), a blind source separation technique, can be applied to separate mixed hyperspectral signals. For the purpose of detecting objects on the sea and improving the precision of target recognition, the original ICA method is applied, and the influence exerted by the spectral features of different materials and mixtures on the spectral unmixing results is analyzed. Due to the complexity of targets on the sea, several measured spectra of different materials were mixed with water spectra to simulate mixed spectra for decomposition. Synthetic mixed spectra are generated by linear combinations of different material and water spectra, and the separated results are compared with the measured spectra of each endmember by coefficient of determination. We conclude that factors which push the original spectral characteristics toward a Gaussian distribution have a significant influence on the separated results, and that selecting a proper initial matrix and processing spectral data with lower noise can help the ICA method obtain more accurate separation results from hyperspectral data.
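
    The simulation loop described above can be sketched as follows, with invented stand-in spectra in place of the measured ones; scikit-learn's FastICA is assumed, and each recovered component is scored against the endmembers by coefficient of determination.

        # Mix endmember spectra with a water spectrum, unmix with FastICA,
        # and match each independent component to its closest endmember.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(2)
        n_bands = 200
        water = np.exp(-np.linspace(0, 4, n_bands))   # stand-in spectra
        metal = 0.3 + 0.5 * np.sin(np.linspace(0, 6, n_bands)) ** 2
        paint = np.linspace(0.2, 0.9, n_bands)
        sources = np.stack([water, metal, paint])

        mix = rng.uniform(0.1, 1.0, size=(5, 3))      # 5 synthetic mixed spectra
        X = mix @ sources + rng.normal(scale=1e-3, size=(5, n_bands))

        S = FastICA(n_components=3, random_state=0).fit_transform(X.T).T

        def r2(a, b):
            return np.corrcoef(a, b)[0, 1] ** 2

        for i, s in enumerate(S):
            scores = [r2(s, e) for e in sources]
            print(f"IC{i}: endmember {int(np.argmax(scores))}, R^2={max(scores):.3f}")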

  15. Dependent component analysis based approach to robust demarcation of skin tumors

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2009-02-01

    A method for robust demarcation of basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of the red-green-blue (RGB) fluorescent image of the BCC. It exploits the spectral diversity between the BCC and the surrounding tissue. DCA is an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by the spectral similarity between the BCC and the surrounding tissue. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level sets), K-means clustering, non-negative matrix factorization, and ICA, we experimentally demonstrate good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image is varied by almost two orders of magnitude.

  16. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis in three steps. First, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, to reconstruct a new set of data called Virtual Intrinsic Mode Functions (VIMFs). Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis was also conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
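
    A minimal sketch of the three-step EEMD-ICA pipeline, assuming the PyEMD package (distributed on PyPI as EMD-signal) and scikit-learn; the grouping of unimportant IMFs into VIMFs is simplified to a fixed split, and the input series is a random stand-in for the gold price.

        # Step 1: EEMD decomposition; step 2: regroup minor IMFs into a
        # virtual IMF; step 3: FastICA on the VIMFs.
        import numpy as np
        from PyEMD import EEMD
        from sklearn.decomposition import FastICA

        price = np.cumsum(np.random.default_rng(3).normal(size=1024))

        imfs = EEMD(trials=50).eemd(price)         # (n_imfs, n_samples)

        # Re-group the first few high-frequency IMFs into one VIMF
        vimfs = np.vstack([imfs[:3].sum(axis=0), imfs[3:]])

        ica = FastICA(n_components=min(4, len(vimfs)), random_state=0)
        ics = ica.fit_transform(vimfs.T).T
        print(ics.shape)   # each row is one IC; the series is a linear mix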

  17. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective methods for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in terms of both performance and simplicity. PMID:26917856
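
    The non-parametric method can be sketched as follows, assuming scikit-learn: PCA learns a smooth-baseline basis from a learning matrix of continuous spectra, and a new spectrum's baseline is estimated by projection onto that basis. All spectra below are synthetic.

        # Learn a baseline basis with PCA, then estimate and subtract the
        # baseline of a peaked spectrum by projecting onto that basis.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        wn = np.linspace(0, 1, 500)
        # Learning matrix of smooth baselines with varying slope/curvature
        baselines = np.array([a + b * wn + c * wn ** 2
                              for a, b, c in rng.uniform(0, 2, size=(200, 3))])
        pca = PCA(n_components=3).fit(baselines)

        # Synthetic measured spectrum: baseline + two emission peaks + noise
        peaks = 1.5 * np.exp(-(wn - 0.3) ** 2 / 1e-4) + np.exp(-(wn - 0.7) ** 2 / 1e-4)
        spectrum = (0.5 + 1.2 * wn) + peaks + rng.normal(scale=0.01, size=wn.size)

        # Project onto the sampled basis vectors to estimate the baseline;
        # in practice peak regions are often excluded from the fit.
        baseline_hat = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
        corrected = spectrum - baseline_hat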

  18. Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study

    NASA Astrophysics Data System (ADS)

    Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas

    2009-02-01

    We aim to capture and apply three-dimensional bone fragility features for fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigenimages) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject and used the fracture risk indices to discriminate between fracture patients and controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the area under the curve (AUC). When using the first group for training and the second for testing, the AUC was 0.880, compared with conventional fracture risk estimation methods based on bone densitometry, which had AUC values ranging between 0.782 and 0.871. When using the second group for training, the AUC was 0.839, compared with densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from the hip QCT atlas are associated with hip fracture. Such features may provide new quantitative measures of interest for osteoporosis assessment.
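
    The train/test protocol can be sketched with random stand-in data in place of the registered QCT scans, assuming scikit-learn; the feature counts and the logistic risk model are illustrative choices, not the authors' exact model.

        # PCA features from "atlas-space" images feed a linear fracture-risk
        # model, evaluated by ROC AUC on the held-out group.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(75, 5000))       # 75 registered, vectorized scans
        y = np.array([1] * 37 + [0] * 38)     # 37 fractures, 38 controls

        idx = rng.permutation(75)
        train, test = idx[:38], idx[38:]

        pca = PCA(n_components=10).fit(X[train])      # eigenimages of the atlas
        model = LogisticRegression().fit(pca.transform(X[train]), y[train])

        risk_index = model.predict_proba(pca.transform(X[test]))[:, 1]
        print("AUC:", roc_auc_score(y[test], risk_index))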

  19. Image-based pupil plane characterization via principal component analysis for EUVL tools

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-03-01

    We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the Semiconductor High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.

  20. Principal component analysis based carrier removal approach for Fourier transform profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao

    2015-05-01

    To handle the issue of the nonlinear carrier phase caused by the divergent illumination commonly adopted in fringe projection measurement, we propose a principal component analysis (PCA) based carrier removal method for Fourier transform profilometry. By PCA, the method decomposes the nonlinear carrier phase map into several principal components, and the carrier phase can be extracted from the first, dominant component. Compared with traditional methods, it is effective and requires less human intervention, since no data points need to be collected from the reference plane in advance. Further, the influence of lens distortion is considered, so the carrier can be determined more accurately. Our experiments show the validity of the proposed approach.

  1. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  2. Extracting the core indicators of pulverized coal for blast furnace injection based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Hong-wei; Su, Bu-xin; Zhang, Jian-liang; Zhu, Meng-yi; Chang, Jian

    2013-03-01

    An updated approach to refining the core indicators of pulverized coal used for blast furnace injection, based on principal component analysis, is proposed in view of the disadvantages of the existing performance indicator system for pulverized coal used in blast furnaces. The presented method takes into account all the performance indicators of pulverized coal injection, including calorific value, ignition point, combustibility, reactivity, flowability, and grindability. Four core indicators of pulverized coal injection are selected and studied using principal component analysis, namely comprehensive combustibility, comprehensive reactivity, comprehensive flowability, and comprehensive grindability. The newly established core index system is not only beneficial in narrowing down the current evaluation indices but also effective in avoiding the previous overlap among indicators, owing to the mutually independent index design. Furthermore, a comprehensive property indicator is introduced on the basis of the four core indicators, so that the injection properties of pulverized coal can be evaluated as a whole.

  3. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  4. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors into subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes two new IPCA-based outlier detection methods and a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
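
    One simple IPCA-based detector (a sketch of the general idea, not necessarily either of the paper's two methods) maintains an incremental PCA model of the stream and flags readings whose reconstruction error exceeds a running threshold; scikit-learn's IncrementalPCA is assumed, and all sizes are illustrative.

        # Stream mini-batches through IncrementalPCA and flag points whose
        # reconstruction error is far outside the error history.
        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        rng = np.random.default_rng(6)
        ipca = IncrementalPCA(n_components=3)
        errors = []

        for batch_no in range(20):                 # stream arrives in batches
            batch = rng.normal(size=(50, 8))       # 50 readings from 8 sensors
            if batch_no == 10:
                batch[0] += 15.0                   # inject an outlier
            ipca.partial_fit(batch)
            recon = ipca.inverse_transform(ipca.transform(batch))
            err = np.linalg.norm(batch - recon, axis=1)
            threshold = np.mean(errors) + 3 * np.std(errors) if errors else np.inf
            print("outliers in batch", batch_no, np.where(err > threshold)[0])
            errors.extend(err)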

  5. Inverting geodetic time series with a principal component analysis-based inversion method

    NASA Astrophysics Data System (ADS)

    Kositsky, A. P.; Avouac, J.-P.

    2010-03-01

    The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through an inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying it to synthetic geodetic time series, showing that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 M8.6 Nias earthquake with a solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
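
    The core of the approach can be sketched in a few lines of linear algebra: decompose the station-by-time displacement matrix by SVD, invert each retained spatial component for slip, and recombine with the temporal coefficients. The Green's functions and data below are random stand-ins, and the real PCAIM includes data weighting and regularization omitted here.

        # PCA (via SVD) of geodetic time series, followed by a linear slip
        # inversion of each principal component's spatial pattern.
        import numpy as np

        rng = np.random.default_rng(7)
        n_sta, n_t, n_patch = 30, 200, 12
        D = rng.normal(size=(n_sta, n_t))      # displacement time series
        G = rng.normal(size=(n_sta, n_patch))  # half-space Green's functions

        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        k = 3                                  # keep the leading components

        # Invert each principal component's spatial pattern for slip
        slip_pc = np.linalg.lstsq(G, U[:, :k] * s[:k], rcond=None)[0]

        # Slip history = component slip patterns recombined in time
        slip_history = slip_pc @ Vt[:k]        # (n_patch, n_t)
        print(slip_history.shape)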

  6. Modeling PCB dechlorination in aquatic sediments by principal component based factor analysis and positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Christensen, E. R.; Bzdusek, P. A.

    2003-04-01

    Anaerobic PCB dechlorination in aquatic sediments is a naturally occurring process that reduces dioxin-like PCB toxicity. The PCB biphenyl structure is kept intact, but the number of substituted chlorine atoms is reduced, primarily at the para and meta positions. Flanked para and meta chlorine dechlorination, as in process H/H', appears to be more common in situ than flanked and unflanked para and meta dechlorination, as in process Q. Aroclors susceptible to these reactions include 1242, 1248, 1254, and 1260. These dechlorination reactions have recently been modeled by a least squares method for Ashtabula River, Ohio, and Fox River, Wisconsin sediments. Prior to modeling the dechlorination reactions for an ecosystem, it is desirable to generate overall PCB source functions. One method of determining source functions is to use the loading matrices of a factor-analytical model. We have developed such models based both on a principal component approach including nonnegative oblique rotations and on positive matrix factorization (PMF). While the principal component method first requires an eigenvalue analysis of a covariance matrix, the PMF method is based on a direct least squares analysis that considers the loading and score matrices simultaneously. Loading matrices obtained from the PMF method are somewhat sensitive to the initial guess of the source functions. Preliminary work indicates that a hybrid approach, considering first principal components and then PMF, may offer an optimal solution. The relationship of PMF to conventional chemical mass balance modeling, with or without prior knowledge of source functions, is also discussed.

  7. A novel concealed information test method based on independent component analysis and support vector machine.

    PubMed

    Gao, Junfeng; Lu, Liang; Yang, Yong; Yu, Gang; Na, Liantao; Rao, NiNi

    2012-01-01

    The concealed information test (CIT) has drawn much attention and has been widely investigated in recent years. In this study, a novel CIT method based on denoised P3 and machine learning was proposed to improve the accuracy of lie detection. Thirty participants were assigned as guilty or innocent participants and performed paradigms with 3 types of stimuli. The electroencephalogram (EEG) signals were recorded and separated into many single trials. In order to enhance the signal-to-noise ratio (SNR) of the P3 components, independent component analysis (ICA) was adopted to separate non-P3 components (i.e., artifacts) from every single trial, and a new method based on a topography template was proposed to automatically identify the P3 independent components (ICs). The P3 waveforms with high SNR were then reconstructed at the Pz electrode. Second, 3 groups of features based on time, frequency, and wavelets were extracted from the reconstructed P3 waveforms. Finally, the 2 classes of feature samples were used to train a support vector machine (SVM) classifier, because it has higher performance compared with several other classifiers. The optimal number of P3 ICs and other parameter values in the classifiers were determined by cross-validation procedures. The presented method achieved a balanced test accuracy of 84.29% in detecting P3 components for the guilty and innocent participants, improving the efficiency of the CIT in comparison with previously reported methods. PMID:22423552
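
    The ICA-denoising and SVM stages can be sketched as follows, assuming scikit-learn; the topography-template identification of P3 ICs is replaced by a placeholder component index, the Pz electrode index is hypothetical, and random data stand in for the EEG epochs.

        # Per-trial ICA denoising (keep assumed P3 ICs, zero the rest),
        # then SVM classification of the reconstructed waveforms.
        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)
        n_trials, n_ch, n_samp = 120, 16, 256
        epochs = rng.normal(size=(n_trials, n_ch, n_samp))
        labels = rng.integers(0, 2, size=n_trials)      # guilty vs. innocent

        def denoised_p3(epoch, keep=(0, 1)):
            """Unmix one epoch, keep the assumed P3 ICs, rebuild one channel."""
            ica = FastICA(n_components=8, random_state=0, max_iter=500)
            sources = ica.fit_transform(epoch.T)        # (n_samp, n_ics)
            mask = np.zeros(sources.shape[1])
            mask[list(keep)] = 1.0                      # zero out non-P3 ICs
            clean = ica.inverse_transform(sources * mask)
            return clean[:, 10]                         # hypothetical Pz index

        X = np.array([denoised_p3(e) for e in epochs])  # time-domain features
        print(cross_val_score(SVC(kernel='rbf'), X, labels, cv=5).mean())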

  8. Blind spectral unmixing based on sparse component analysis for hyperspectral remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Wang, Xinyu; Zhao, Lin; Feng, Ruyi; Zhang, Liangpei; Xu, Yanyan

    2016-09-01

    Recently, many blind source separation (BSS)-based techniques have been applied to hyperspectral unmixing. In this paper, a new blind spectral unmixing method based on sparse component analysis (BSUSCA) is proposed to solve the problem of highly mixed data. The BSUSCA algorithm consists of an alternating scheme based on two-block alternating optimization, by which the endmember signatures and their corresponding fractional abundances are obtained simultaneously. According to the spatial distribution of the endmembers, the sparsity of the fractional abundances is considered in the proposed algorithm. A sparse component analysis (SCA)-based mixing matrix estimation method is applied to update the endmember signatures, and the abundance estimation problem is solved by the alternating direction method of multipliers (ADMM). SCA is utilized for the unmixing due to its various advantages, including the uniqueness of its solution and its robust modeling assumptions. The robustness of the proposed algorithm is verified through a simulated experimental study. Experimental results using both simulated data and real hyperspectral remote sensing images confirm the high efficiency and precision of the proposed algorithm.

  9. [Component analysis of complex mixed solution based on multidimensional diffuse reflectance spectroscopy].

    PubMed

    Li, Gang; Xiong, Chan; Zhao, Li-ying; Lin, Ling; Tong, Ying; Zhang, Bao-ju

    2012-02-01

    In the present paper, the authors propose a method for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy, analyzing the information carried by spectrum signals arising from the various optical properties of the components of the analyte. The experimental instrument comprised a supercontinuum laser source, a motorized precision translation stage, and a spectrometer. Intralipid-20% was taken as the analyte and was diluted over a range of 1%-20% in distilled water. The diffuse reflectance spectrum was measured at 24 points at distances of 1.5-13 mm (at intervals of 0.5 mm) from the incidence point. A partial least squares model was used to perform modeling and forecasting analysis on the spectral data collected from single and multiple points. The results showed that the most accurate calibration model was created from the spectral data acquired at the nearest 1-13 points from the incidence point, and the most accurate prediction model from the spectral signal acquired at the nearest 1-7 points. This demonstrates that multidimensional diffuse reflectance spectroscopy can improve the spectral signal-to-noise ratio. Compared with traditional spectrum techniques using a single optical property such as absorbance or reflectance, this method increases the contribution of the scattering characteristics of the analyte. The use of a variety of optical properties of the analyte can thus improve the accuracy of modeling and forecasting, and also provides a basis for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy. PMID:22512196
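
    The modeling step can be sketched with scikit-learn's PLSRegression on synthetic stand-in data, where spectra from several detection points are concatenated into one predictor vector per sample; all sizes and the noise model are illustrative.

        # Partial least squares calibration from concatenated multi-point
        # diffuse reflectance spectra to analyte concentration.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)
        n_samples, n_points, n_bands = 60, 7, 128
        conc = rng.uniform(1, 20, size=n_samples)        # % Intralipid

        # Each detection point sees the concentration through its own response
        response = rng.normal(size=n_points * n_bands)
        X = conc[:, None] * response[None, :]
        X += rng.normal(scale=0.05, size=(n_samples, n_points * n_bands))

        X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
        print("R^2 on held-out samples:", pls.score(X_te, y_te))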

  10. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly addresses the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  11. Analysis of active components in Salvia miltiorrhiza injection based on vascular endothelial cell protection.

    PubMed

    Shen, Jie; Yang, Kai; Sun, Caihua; Zheng, Minxia

    2014-09-01

    Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurements of antioxidant properties were used to describe the active ingredients of Salvia miltiorrhiza injection (SMI). HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid, and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of serum powder of SMI caused differences in vascular endothelial cell protection. Bivariate correlation analysis showed that salvianolic acid B, tanshinol, and protocatechuic aldehyde were active components of SMI, as they correlated with the antioxidant properties. PMID:25296678

  12. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    PubMed

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations. PMID:25628096

  13. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiment, two questions remain: (1) How can the results obtained be explained? (2) How can the results be extended from these cases to the general case? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions, such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then the performance of the basic functions and the weighting distributions of the iterative methods are examined separately, yielding two independent sets of information for a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, parallel machine, and number of processors.

  14. Functional activity maps based on significance measures and Independent Component Analysis.

    PubMed

    Martínez-Murcia, F J; Górriz, J M; Ramírez, J; Puntonet, C G; Illán, I A

    2013-07-01

    The use of functional imaging has proven very helpful in the diagnosis of neurodegenerative diseases such as Alzheimer's Disease (AD). In many cases, the analysis of these images is performed by manual reorientation and visual interpretation; therefore, new statistical techniques that enable a more quantitative analysis are needed. In this work, a new statistical approach to the analysis of functional images, based on significance measures and Independent Component Analysis (ICA), is presented. After image preprocessing, the voxels that allow the best separation of the two classes are extracted using significance measures such as the Mann-Whitney-Wilcoxon U-test (MWW) and Relative Entropy (RE). After this feature selection step, the voxel vector is modelled by means of ICA, extracting a few independent components which are used as input to the classifier. Naive Bayes and Support Vector Machine (SVM) classifiers are used in this work. The proposed system has been applied to two different databases: a 96-subject Single Photon Emission Computed Tomography (SPECT) database from the "Virgen de las Nieves" Hospital in Granada, Spain, and a 196-subject Positron Emission Tomography (PET) database from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The proposed system achieves accuracies of up to 96.9% and 91.3% for the SPECT and PET databases, respectively, yielding many benefits over methods proposed in recent works. PMID:23660005
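
    A minimal sketch of the pipeline, assuming SciPy and scikit-learn, with random data standing in for the preprocessed SPECT/PET images: voxels are selected by MWW U-test p-value, reduced by ICA, and classified by an SVM; all sizes are illustrative.

        # Significance-based voxel selection, ICA feature extraction, and
        # cross-validated SVM classification.
        import numpy as np
        from scipy.stats import mannwhitneyu
        from sklearn.decomposition import FastICA
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(10)
        X = rng.normal(size=(96, 4000))          # subjects x voxels
        y = rng.integers(0, 2, size=96)          # AD vs. control labels

        # Keep voxels whose class distributions differ most (smallest p-value)
        pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                          for j in range(X.shape[1])])
        selected = X[:, np.argsort(pvals)[:300]]

        ics = FastICA(n_components=10, random_state=0).fit_transform(selected)
        print(cross_val_score(SVC(), ics, y, cv=5).mean())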

  15. Cardiac autonomic changes in middle-aged women: identification based on principal component analysis.

    PubMed

    Trevizani, Gabriela A; Nasario-Junior, Olivassé; Benchimol-Barbosa, Paulo R; Silva, Lilian P; Nadal, Jurandir

    2016-07-01

    The purpose of this study was to investigate the application of the principal component analysis (PCA) technique to the power spectral density function (PSD) of consecutive normal RR intervals (iRR), aiming to assess its ability to discriminate healthy women according to age group: a young group (20-25 years old) and a middle-aged group (40-60 years old). Thirty healthy, non-smoking female volunteers were investigated (13 young [mean ± SD (median): 22·8 ± 0·9 years (23·0)] and 17 middle-aged [51·7 ± 5·3 years (50·0)]). The iRR sequence was collected for ten minutes, breathing spontaneously, in the supine position and in the morning, using a heart rate monitor. After selecting the iRR segment (5 min) with the smallest variance, an autoregressive model was used to estimate the PSD. Five principal component coefficients, extracted from the PSD signals, were retained for analysis with a Mahalanobis distance classifier. A threshold established by logistic regression allowed separation of the groups with 100% specificity, 83·2% sensitivity, and 93·3% total accuracy. PCA appropriately classified the two groups of women in relation to age (young and middle-aged) based on PSD analysis of consecutive normal RR intervals. PMID:25532598

  16. A Conditional Entropy-Based Independent Component Analysis for Applications in Human Detection and Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Siana, Linda; Shou, Yu-Wen; Shen, Tzu-Kuei

    2010-12-01

    We present in this paper a modified independent component analysis (mICA) based on conditional entropy to discriminate unsorted independent components. We make use of the conditional entropy to select an appropriate subset of the ICA features with superior classification capability and apply a support vector machine (SVM) to recognizing human and nonhuman patterns. Moreover, we use background image models based on a Gaussian mixture model (GMM) to handle images with complicated backgrounds. Color-based shadow elimination and elliptical head models are also combined to improve the performance of moving object extraction and recognition in our system. Our tracking mechanism monitors the movement of humans, animals, or vehicles within a surveillance area and keeps tracking moving pedestrians using color information in the HSV domain. It uses the Kalman filter to predict the locations of moving objects when detected objects lack color information. Finally, our experimental results show that the proposed approach performs well in real-time applications in both indoor and outdoor environments.

  17. Forecasting of Air Quality Index in Delhi Using Neural Network Based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Kumar, Anikender; Goyal, P.

    2013-04-01

    Forecasting of the air quality index (AQI) is one of the topics of air quality research today, as it is useful for assessing the effects of air pollutants on human health in urban areas. The last decade has shown that airborne pollution is a serious problem in Delhi and will remain a major one in the next few years. The air quality index is a number, based on the combined effect of the concentrations of major air pollutants, used by government agencies to characterize the quality of the air at different locations; it is also used for local and regional air quality management in many metro cities of the world. The main objective of the present study is thus to forecast the daily AQI with a neural network based on principal component analysis (PCA). The AQI of the criteria air pollutants was forecasted using the previous day's AQI and meteorological variables, which were found to be nearly the same for weekends and weekdays. The principal components of the PCA-based neural network (PCA-neural network) were computed using the correlation matrix of the input data. The PCA-neural network model was evaluated by comparing its results with those of a plain neural network and with observed values during 2000-2006 in four different seasons, using statistical parameters which reveal that the PCA-neural network performs better than the neural network in all four seasons.
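
    A minimal sketch of a PCA-neural network forecaster, assuming scikit-learn; the inputs, network size, and data split are illustrative stand-ins for the Delhi observations, not the authors' configuration.

        # Decorrelate previous-day AQI and meteorological inputs with PCA
        # before feeding a small neural network regressor.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(11)
        n_days = 1500
        inputs = rng.normal(size=(n_days, 6))   # AQI(t-1), temperature, wind, ...
        aqi = inputs @ rng.uniform(0.5, 2.0, size=6)
        aqi += rng.normal(scale=0.5, size=n_days)

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=4),    # principal components
                              MLPRegressor(hidden_layer_sizes=(8,),
                                           max_iter=2000, random_state=0))
        model.fit(inputs[:1200], aqi[:1200])
        print("test R^2:", model.score(inputs[1200:], aqi[1200:]))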

  18. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose To investigate a fast, objective, and standardized method for analyzing breast DCE-MRI by applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a first IS eigenvector that captured the signal variation between fat and fibroglandular tissue, two further IS eigenvectors and the first two ES eigenvectors that captured contrast-enhanced changes, and remaining eigenvectors that captured predominantly noise. Rotation of the two contrast-related eigenvectors led to high congruence between the projection coefficients and the 3TP parameters. The ES eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector basis. ROC analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the first rotated eigenvector for detecting lesions (AUC>0.97) and of the second rotated eigenvector for differentiating malignancy from benignancy (AUC=0.87). Conclusion PCA adjusted with a model-based method provides a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419

  19. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  20. The use of principal component and cluster analysis to differentiate banana peel flours based on their starch and dietary fibre components.

    PubMed

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-08-01

    Banana peel flours (BPF) prepared from green and ripe Cavendish and Dream banana fruits were assessed for their total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF), and insoluble dietary fibre (IDF). Principal component analysis (PCA) showed that a single component accounted for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating differences in behaviour according to the stage of ripeness based on the starch and dietary fibre components. We conclude that the starch and dietary fibre components can be used to discriminate between flours prepared from peels of fruits at different stages of ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food. PMID:24575193
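
    The PCA-plus-cluster-analysis workflow can be sketched as follows, assuming scikit-learn; the composition values below are invented stand-ins, not the paper's measurements.

        # Standardize the composition table, project onto one principal
        # component, and cluster the samples into two groups.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Columns: TS, DS, RS, TDF, SDF, IDF for six flour samples
        flours = np.array([[65.1, 20.3, 44.8, 42.1, 5.2, 36.9],   # green
                           [64.2, 21.0, 43.2, 43.0, 5.0, 38.0],   # green
                           [63.8, 19.5, 44.3, 41.5, 5.5, 36.0],   # green
                           [40.2, 28.9, 11.3, 50.2, 9.8, 40.4],   # ripe
                           [41.0, 29.5, 11.5, 49.8, 10.1, 39.7],  # ripe
                           [39.5, 28.1, 11.4, 51.0, 9.5, 41.5]])  # ripe

        z = StandardScaler().fit_transform(flours)
        pc1 = PCA(n_components=1).fit_transform(z)   # one dominant component
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pc1)
        print(clusters)                              # green vs. ripe separation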

  1. A component mode synthesis based hybrid method for the dynamic analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Roibás Millán, E.; Chimeno Manguán, M.; Simón Hidalgo, F.

    2015-11-01

    A hybrid method is presented for predicting the dynamic response of complex systems across a broad frequency range. In the mid-frequency range it is quite common to find a mixture of long-wavelength motion (global modes), which spans several sub-structures, together with weakly phase-correlated local motion (local modes) that is confined to individual sub-structures. In this work, the use of Component Mode Synthesis allows us to relate Finite Element Method sub-structuring to the location of modes within the different sub-structures defined in a Statistical Energy Analysis model. The method proposed here, Hybrid Analysis based on Component Mode Synthesis sub-structuring (HA-CMS), provides greater flexibility in defining the applicability range of each calculation method. A deterministic description of the global behaviour of the system is combined with a statistical description of the local one, taking into account the energy transfer between global and local scales. The application of the HA-CMS method is illustrated with a numerical validation example.

  2. Impact-acoustics-based health monitoring of tile-wall bonding integrity using principal component analysis

    NASA Astrophysics Data System (ADS)

    Tong, F.; Tso, S. K.; Hung, M. Y. Y.

    2006-06-01

    The use of acoustic features extracted from impact sounds for bonding integrity assessment has been extensively investigated. Nonetheless, considering the practical implementation of tile-wall non-destructive evaluation (NDE), traditional defect classification based directly on frequency-domain features has been of limited application because the feature patterns of different classes overlap whenever there is physical surface irregularity. The purpose of this paper is to explore the clustering and classification ability of principal component analysis (PCA) as applied to the impact-acoustics signature in tile-wall inspection, with a view to mitigating the adverse influence of surface non-uniformity. A clustering analysis with signatures acquired on sample slabs shows that impact-acoustics signatures of different bonding quality and different surface roughness are well separated into different clusters when using the first two principal components obtained. By adopting the PCA-extracted feature vectors as inputs, a multilayer back-propagation artificial neural network (ANN) classifier is developed for automatic health monitoring and defect classification of tile-walls. The inspection results obtained experimentally on the prepared sample slabs are presented and discussed, confirming the utility of the proposed method, particularly in dealing with tile surface irregularity.
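
    A hedged sketch of the pipeline described above: PCA compresses spectral features of the impact sounds, and a small back-propagation network classifies bonding quality. The data here are random placeholders, so the printed score is only a smoke test, not a performance claim.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows = impact-sound recordings, columns = spectral
# features; labels encode bonding-quality classes (all values synthetic).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 64))
y = rng.integers(0, 3, size=120)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),            # the cluster-revealing first two PCs
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))              # training score on placeholder data
```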

  3. Discriminant Incoherent Component Analysis.

    PubMed

    Georgakis, Christos; Panagakis, Yannis; Pantic, Maja

    2016-05-01

    Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes, such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and AU activations. In this paper, discriminant incoherent component analysis (DICA) is proposed in order to extract low-complexity components, corresponding to facial attributes, which are mutually incoherent among different classes (e.g., identity, expression, and AU activation) from training data, even in the presence of gross sparse errors. To this end, a suitable optimization problem, involving the minimization of the nuclear- and l1-norms, is solved. Having found an ensemble of class-specific incoherent components by DICA, an unseen (test) image is expressed as a group-sparse linear combination of these components, where the non-zero coefficients reveal the class(es) of the respective facial attribute(s) that it belongs to. The performance of DICA is experimentally assessed on both synthetic and real-world data. Emphasis is placed on face analysis tasks, namely, joint face and expression recognition, face recognition under varying percentages of training data corruption, subject-independent expression recognition, and AU detection, by conducting experiments on four data sets. The proposed method outperforms all compared methods across all tasks and experimental settings. PMID:27008268

  4. PATHWAY-BASED ANALYSIS FOR GENOME-WIDE ASSOCIATION STUDIES USING SUPERVISED PRINCIPAL COMPONENTS

    PubMed Central

    Chen, Xi; Wang, Lily; Hu, Bo; Guo, Mingsheng; Barnard, John; Zhu, Xiaofeng

    2012-01-01

    Many complex diseases are influenced by genetic variations in multiple genes, each with only a small marginal effect on disease susceptibility. Pathway analysis, which identifies biological pathways associated with disease outcome, has become increasingly popular for genome-wide association studies (GWAS). In addition to combining weak signals from a number of SNPs in the same pathway, results from pathway analysis also shed light on the biological processes underlying disease. We propose a new pathway-based analysis method for GWAS, the supervised principal component analysis (SPCA) model. In the proposed SPCA model, a selected subset of SNPs most associated with disease outcome is used to estimate the latent variable for a pathway. The estimated latent variable for each pathway is an optimal linear combination of a selected subset of SNPs; therefore, the proposed SPCA model provides the ability to borrow strength across the SNPs in a pathway. In addition to identifying pathways associated with disease outcome, SPCA also carries out additional within-category selection to identify the most important SNPs within each gene set. The proposed model operates in a well-established statistical framework and can handle design information such as covariate adjustment and matching information in GWAS. We compare the proposed method with currently available methods using data with realistic linkage disequilibrium structures and we illustrate the SPCA method using the Wellcome Trust Case-Control Consortium Crohn Disease (CD) dataset. PMID:20842628
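
    A toy sketch of the supervised PCA idea described above: screen SNPs by marginal association with the outcome, then take the first principal component of the selected subset as the pathway's latent variable. The screening rule, dimensions, and data below are illustrative, not the paper's exact estimator.

```python
import numpy as np

def supervised_pca_scores(X, y, top_k=20):
    """Screen features by marginal correlation with the outcome, then take
    the first principal component of the selected subset (illustrative)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    keep = np.argsort(corr)[-top_k:]
    U, s, Vt = np.linalg.svd(Xc[:, keep], full_matrices=False)
    return U[:, 0] * s[0], keep        # latent variable for the "pathway"

rng = np.random.default_rng(2)
X = rng.integers(0, 3, size=(300, 500)).astype(float)  # SNP dosages 0/1/2
y = X[:, :5].sum(axis=1) + rng.normal(size=300)        # few causal SNPs
latent, selected = supervised_pca_scores(X, y)
print(np.intersect1d(selected, np.arange(5)))          # causal SNPs recovered
```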

  5. A remote sensing image fusion method based on feedback sparse component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Yu, Xianchuan; Pei, Wenjing; Hu, Dan; Zhang, Libao

    2015-12-01

    We propose a new remote sensing image (RSI) fusion technique based on sparse blind source separation theory. Our method employs feedback sparse component analysis (FSCA), which can extract the original image in a step-by-step manner and is robust against noise. For RSIs from the China-Brazil Earth Resources Satellite, FSCA can separate useful surface feature information from redundant information and noise. The FSCA algorithm is therefore used to develop two RSI fusion schemes: one focuses on fusing high-resolution and multi-spectral images, while the other fuses synthetic aperture radar bands. The experimental results show that the proposed method can preserve spectral and spatial details of the source images. For certain evaluation indexes, our method performs better than classical fusion methods.

  6. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline-based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement, and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions in different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting-state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592
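
    The spline-density ICA above is not available in standard libraries; as a stand-in, the following sketch shows the same blind-unmixing task solved with scikit-learn's FastICA on two synthetic sources.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two toy sources mixed by a known matrix, then unmixed blindly with FastICA
# (a generic substitute for the paper's density-estimation ICA).
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.4, 1.0]])           # mixing matrix
X = S @ A.T                                      # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                     # recovered sources
# Cross-correlations between recovered and true sources: ~ +/-1 on a permutation.
print(np.round(np.corrcoef(S_hat.T, S.T)[:2, 2:], 2))
```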

  7. A multi-fault diagnosis method for sensor systems based on principal component analysis.

    PubMed

    Zhu, Daqi; Bai, Jie; Yang, Simon X

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of the sensors are computed using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated; a fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, the faulty sensors are located by combining different sensors, reconstructing the signals of the combined sensors, and recalculating the SPE. Finally, the feasibility and effectiveness of the proposed method are demonstrated by simulation and comparison studies in which two sensors in the system are out of order at the same time. PMID:22315537
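
    A minimal sketch of the SPE-based detection step described above, assuming a PCA model trained on fault-free data and an empirical control limit; the combinatorial sensor-reconstruction step used for fault isolation is omitted, and all signals are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Train a PCA model on fault-free data, then flag a fault when the squared
# prediction error (SPE) of a new sample exceeds an empirical control limit.
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 2))
X_train = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
pca = PCA(n_components=2).fit(X_train)

def spe(model, X):
    """Squared prediction error: residual after PCA reconstruction."""
    X_hat = model.inverse_transform(model.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

limit = np.percentile(spe(pca, X_train), 99)   # empirical 99% control limit
x_fault = X_train[:1].copy()
x_fault[0, 3] += 5.0                           # bias fault on sensor 3
print(spe(pca, x_fault)[0] > limit)            # True: fault detected
```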

  8. An Image Reconstruction Algorithm for Electrical Capacitance Tomography Based on Robust Principle Component Analysis

    PubMed Central

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-01-01

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, where the images are reconstructed in a batch pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm. PMID:23385418

  9. Sex-based differences in lifting technique under increasing load conditions: A principal component analysis.

    PubMed

    Sheppard, P S; Stevenson, J M; Graham, R B

    2016-05-01

    The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. PMID:26851478

  10. Contact- and distance-based principal component analysis of protein dynamics

    NASA Astrophysics Data System (ADS)

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-01

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
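
    A small sketch of distance-based PCA as described above: build pairwise Cα-distance features from a trajectory and diagonalize via SVD. The trajectory here is random placeholder data standing in for MD frames.

```python
import numpy as np

# Distance-based PCA of a toy "trajectory": frames x atoms x 3 coordinates;
# features = upper triangle of the pairwise Calpha distance matrix.
rng = np.random.default_rng(4)
traj = rng.normal(size=(200, 10, 3))             # hypothetical MD frames
i, j = np.triu_indices(10, k=1)
dists = np.linalg.norm(traj[:, i, :] - traj[:, j, :], axis=2)  # (200, 45)

Xc = dists - dists.mean(axis=0)                  # center the features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                   # PC projections per frame
var_ratio = s**2 / np.sum(s**2)
print(var_ratio[:3])                             # variance captured by PC1-3
```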

  11. Structure-borne noise analysis using Helmholtz equation least squares based forced vibro-acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology that is focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far-field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation and hence cannot guarantee the best cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve a cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical co-ordinates using spherical wave expansion functions, utilize acoustic pressures measured in the nearfield of a vibrating object to reconstruct the vibro-acoustic responses on the source surface and acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of normal surface velocity over the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain an orthogonal basis known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a similar manner to how modal decomposition identifies the critical natural modes in a structural vibration. Finally…

  12. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  13. Robust principal component analysis-based four-dimensional computed tomography.

    PubMed

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of low-rank matrix and sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction or/and change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames. PMID:21540490
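
    The paper's solver (split Bregman with sparse representation in tight frames) is specific to 4D CT, but the underlying low-rank-plus-sparse decomposition can be sketched with the generic principal component pursuit iteration. The sketch below runs on a synthetic matrix and is not the paper's reconstruction algorithm.

```python
import numpy as np

def rpca_pcp(M, lam=None, mu=None, n_iter=200):
    """Generic principal component pursuit via an inexact augmented
    Lagrangian: min ||L||_* + lam*||S||_1  s.t.  M = L + S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else m * n / (4.0 * np.abs(M).sum())
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(n_iter):
        # Singular-value thresholding updates the low-rank "background".
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Elementwise soft thresholding updates the sparse "motion" term.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
    return L, S

rng = np.random.default_rng(5)
low_rank = np.outer(rng.normal(size=60), rng.normal(size=40))
sparse = np.zeros((60, 40))
sparse[rng.random((60, 40)) < 0.05] = 5.0
L, S = rpca_pcp(low_rank + sparse)
print(np.linalg.matrix_rank(L, tol=1e-3), (np.abs(S) > 1e-3).mean())
```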

  14. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    PubMed Central

    2010-01-01

    Background Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degree-of-freedom (DoF) hand to achieve dexterity implies selectively modulating many different EMG signals in order to make each joint move independently, and this could require significant cognitive effort from the user. Methods A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms most used in Activities of Daily Living (ADLs). The principal component set was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and was subsequently used for control. Results Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036
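
    A minimal sketch of the control principle described above: PCA of recorded grasp postures yields a low-dimensional basis, and a two-dimensional input is mapped back to a full joint configuration. The 16-DoF posture data here are random placeholders, not CyberHand recordings.

```python
import numpy as np

# Two-dimensional control of a many-DoF hand (illustrative): grasp postures
# are reduced with PCA, and a new posture is synthesized from two inputs
# along the first two principal components.
rng = np.random.default_rng(6)
postures = rng.normal(size=(50, 16))         # hypothetical 16-DoF joint data
mean = postures.mean(axis=0)
U, s, Vt = np.linalg.svd(postures - mean, full_matrices=False)
pc1, pc2 = Vt[0], Vt[1]                      # first two posture components

def synthesize(c1, c2):
    """Map a 2-D control input to a full 16-DoF joint configuration."""
    return mean + c1 * pc1 + c2 * pc2

print(synthesize(0.5, -0.2).shape)           # (16,) joint angles
```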

  15. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    Background In contemporary biology, complex biological processes are increasingly studied by collecting and analyzing measurements of the same entities with different analytical platforms. Such data comprise a number of data blocks that are coupled via a common mode. The goal of collecting this type of data is to discover biological mechanisms that underlie the behavior of the variables in the different data blocks. The simultaneous component analysis (SCA) family of data analysis methods is suited for this task. However, a SCA may be hampered by the data blocks being subjected to different amounts of measurement error, or noise. To unveil the true mechanisms underlying the data, it could be fruitful to take noise heterogeneity into consideration in the data analysis. Maximum likelihood based SCA (MxLSCA-P) was developed for this purpose. In a previous simulation study it outperformed normal SCA-P. That study, however, did not mimic typical functional genomics data sets in many respects, such as data blocks coupled via the experimental mode, more variables than experimental units, and medium to high correlations between variables. Here, we present a new simulation study in which the usefulness of MxLSCA-P compared to ordinary SCA-P is evaluated within a typical functional genomics setting. Subsequently, the performance of the two methods is evaluated by analysis of a real-life Escherichia coli metabolomics data set. Results In the simulation study, MxLSCA-P outperforms SCA-P in terms of recovery of the true underlying scores of the common mode and of the true values underlying the data entries. MxLSCA-P performed especially well when the simulated data blocks were subject to different noise levels. In the analysis of the E. coli metabolomics data set, MxLSCA-P provided a slightly better and more consistent interpretation. Conclusion MxLSCA-P is a promising addition to the SCA family. The analysis of coupled functional genomics…

  16. Raman-Based Process Monitor for Continuous Real-Time Analysis of High-Level Radioactive Waste Components

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.; Schlahta, Stephan N.

    2008-05-27

    A new monitoring system was developed at Pacific Northwest National Laboratory (PNNL) to quickly generate real-time data and analysis, facilitating a timely response to the dynamic characteristics of a radioactive high-level waste stream. The process monitor features Raman and Coriolis/conductivity instrumentation configured for remote monitoring, MatLab-based chemometric data processing, and comprehensive software for data acquisition, storage, archiving and display. The monitoring system is capable of simultaneously and continuously quantifying the levels of all the chemically significant anions within the waste stream, including nitrate, nitrite, phosphate, carbonate, chromate, hydroxide, sulfate, and aluminate. The total sodium ion concentration was also determined independently by modeling inputs from on-line conductivity and density meters. In addition to the chemical information, the monitoring system provides immediate real-time data on flow parameters, such as flow rate and temperature, and the cumulative mass/volume of the retrieved waste stream. The components and analytical tools of the new process monitor can be tailored for a variety of complex mixtures in chemically harsh environments, such as pulp and paper processing liquids, electroplating solutions, and radioactive tank wastes. The monitoring system was tested for acceptability before it was deployed for use in Hanford Tank S-109 retrieval activities. The acceptance tests included performance inspection of hardware, software, and chemometric data analysis to determine the expected measurement accuracy for the different chemical species encountered during S-109 retrieval.
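
    The chemometric quantification step can be illustrated, in a much simplified form, as non-negative unmixing of an observed spectrum against known component spectra. The spectra below are synthetic stand-ins, not PNNL's calibration models.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative stand-in for chemometric quantification: an observed spectrum
# is modeled as a non-negative mixture of known component spectra.
rng = np.random.default_rng(7)
n_channels = 300
components = np.abs(rng.normal(size=(n_channels, 4)))  # e.g. nitrate, nitrite, ...
true_conc = np.array([2.0, 0.5, 1.2, 0.0])
observed = components @ true_conc + 0.01 * rng.normal(size=n_channels)

conc, residual = nnls(components, observed)
print(np.round(conc, 2))   # recovered concentrations, close to true_conc
```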

  1. SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components

    SciTech Connect

    Wendt, R

    2014-06-15

    Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819, respectively, and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking.

  2. Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Huebner, Claudia; Marchi, Gabriele

    2008-10-01

    In the field of blind image deconvolution, a promising new algorithm based on Principal Component Analysis (PCA) has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution - IBD - method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; this previously unexplored issue is then considered, and the achieved results are compared with those of the IBD method, which is used as a benchmark.

  3. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. In experiments on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  4. Efficient blind dereverberation and echo cancellation based on independent component analysis for actual acoustic signals.

    PubMed

    Takeda, Ryu; Nakadai, Kazuhiro; Takahashi, Toru; Komatani, Kazunori; Ogata, Tetsuya; Okuno, Hiroshi G

    2012-01-01

    This letter presents a new algorithm for blind dereverberation and echo cancellation based on independent component analysis (ICA) for actual acoustic signals. We focus on frequency-domain ICA (FD-ICA) because its computational cost and speed of learning convergence are sufficiently reasonable for practical applications such as hands-free speech recognition. In applying conventional FD-ICA as a preprocessing step for automatic speech recognition in noisy environments, one of the most critical problems is how to cope with reverberations. To extract a clean signal from the reverberant observation, we model the separation process in the short-time Fourier transform domain and apply the multiple input/output inverse-filtering theorem (MINT) to the FD-ICA separation model. A naive implementation of this method is computationally expensive, because its time complexity is quadratic in the reverberation time. Therefore, the main issue in dereverberation is to reduce the high computational cost of ICA. In this letter, we reduce the computational complexity to linear in the reverberation time by using two techniques: (1) a separation model based on the independence of delayed observed signals with MINT and (2) spatial sphering for preprocessing. Experiments show that the computational cost grows linearly with the reverberation time and that our method improves the word correctness of automatic speech recognition by 10 to 20 points in a reverberant environment with RT₂₀ = 670 ms. PMID:22023192

  5. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator against a significantly more expensive optical encoder. PMID:27529253

  6. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Detecting building changes after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. For demolished or extended building parts, however, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update the detected changes in the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also fewer undersegmentation errors compared with its original counterpart. The proposed change detection technique produces no omission errors and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.

  7. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-size spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-size aberration terms, the precise phase aberration map is obtained and can be compensated by multiplying with its conjugate. Because the phase aberration is estimated from limited but more relevant raw data, the compensation precision is improved while the computation time is significantly reduced. Experimental results demonstrate that the proposed technique achieves both higher compensation accuracy and greater robustness than other developed compensation methods. PMID:26977692

  8. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    SciTech Connect

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals thus enabling accurate monitoring over time.

  9. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modeling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm, against benchmark random walk, ARMA, and SVR models, is attributed to the innovative methodology proposed to incorporate this important stylized fact during the modeling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations. PMID:25061614

  10. Monitoring of an industrial process by multivariate control charts based on principal component analysis.

    PubMed

    Marengo, Emilio; Gennaro, Maria Carla; Gianotti, Valentina; Robotti, Elisa

    2003-01-01

    The control and monitoring of an industrial process are performed in this paper by multivariate control charts. The process analysed consists of the bottling of the entire 1999 production of the sparkling wine "Asti Spumante". This process is characterised by a great number of variables that can be treated with multivariate techniques. Monitoring the process with classical Shewhart charts is dangerous because they do not take into account the functional relationships between the variables. The industrial process was first analysed by multivariate control charts based on Principal Component Analysis. This approach allowed the identification of problems in the process and of their causes. Subsequently, SMART charts (Simultaneous Scores Monitoring And Residual Tracking) were built in order to study the process as a whole. In spite of successfully identifying the presence of problems in the monitored process, the SMART chart did not allow easy identification of the special causes of variation which caused those problems. PMID:12911145
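
    A minimal sketch of a PCA-based multivariate control chart: Hotelling's T² on the retained scores, with an empirical limit from in-control data. This is a generic stand-in, not the SMART chart's simultaneous score-and-residual tracking, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit PCA on in-control data, then signal when Hotelling's T^2 of a new
# sample exceeds an empirical control limit.
rng = np.random.default_rng(8)
X_ok = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 8))  # in-control batch
pca = PCA(n_components=2).fit(X_ok)

def t_squared(model, X):
    """Hotelling's T^2: score norms weighted by component variances."""
    scores = model.transform(X)
    return np.sum(scores**2 / model.explained_variance_, axis=1)

limit = np.percentile(t_squared(pca, X_ok), 99)
x_shift = X_ok[:1] + 20 * pca.components_[0]   # drifted sample
print(t_squared(pca, x_shift)[0] > limit)      # True: out-of-control signal
```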

  11. Cistanches identification based on fluorescent spectral imaging technology combined with principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Dong, Jia; Huang, Furong; Li, Yuanpeng; Xiao, Chi; Xian, Ruiyi; Ma, Zhiguo

    2015-03-01

    In this study, fluorescence spectral imaging combined with principal component analysis (PCA) and artificial neural networks (ANNs) was used to identify Cistanche deserticola, Cistanche tubulosa and Cistanche sinensis, which are traditional Chinese medicinal herbs. The fluorescence spectral imaging system acquired images of 40 cistanche samples, which were denoised and binarized to determine the effective pixels. Spectral curves were then drawn from the data in the 450-680 nm wavelength range, preprocessed with a first-order derivative, and analyzed by principal component analysis and an artificial neural network. The results show that principal component analysis can broadly distinguish the cistanches, and further identification by the neural network makes the results more accurate: the correct rate for both the test and training sets was as high as 100%. Identifying cistanches based on fluorescence spectral imaging combined with principal component analysis and an artificial neural network is therefore feasible.

  12. Towards Zero Retraining for Myoelectric Control Based on Common Model Component Analysis.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Zhu, Xiangyang

    2016-04-01

    In spite of several decades of intense research and development, existing algorithms for myoelectric pattern recognition (MPR) have yet to satisfy the criteria that a practical upper-extremity prosthesis should fulfill. This study focuses on the criterion of short, or even zero, subject training. Due to the inherent nonstationarity of surface electromyography (sEMG) signals, current myoelectric control algorithms usually need to be retrained daily over multiple days of usage. This study was conducted based on the hypothesis that there exist some invariant characteristics in the sEMG signals when a subject performs the same motion on different days. Therefore, given a set of classifiers (models) trained on several days, it is possible to find common characteristics among them. To this end, we proposed the common model component analysis (CMCA) framework, in which an optimized projection is found to minimize the dissimilarity among multiple models of linear discriminant analysis (LDA) trained using data from different days. Five intact-limbed subjects and two transradial amputee subjects participated in an experiment including six sessions of sEMG data recording, performed on six different days, to simulate the application of MPR over multiple days. The results demonstrate that CMCA has significantly better generalization ability on unseen data (not included in the training data), leading to improved classification accuracy and an increased completion rate in a motion test simulation, compared with the baseline reference method. The results indicate that CMCA holds great potential in the effort to develop zero-retraining MPR. PMID:25879963

  13. Lippia origanoides chemotype differentiation based on essential oil GC-MS and principal component analysis.

    PubMed

    Stashenko, Elena E; Martínez, Jairo R; Ruíz, Carlos A; Arias, Ginna; Durán, Camilo; Salgar, William; Cala, Mónica

    2010-01-01

    Chromatographic (GC/flame ionization detection, GC/MS) and statistical analyses were applied to the study of essential oils and extracts obtained from flowers, leaves, and stems of Lippia origanoides plants, growing wild in different Colombian regions. Retention indices, mass spectra, and standard substances were used in the identification of 139 substances detected in these essential oils and extracts. Principal component analysis allowed L. origanoides classification into three chemotypes, characterized according to their essential oil major components. Alpha- and beta-phellandrenes, p-cymene, and limonene distinguished chemotype A; carvacrol and thymol were the distinctive major components of chemotypes B and C, respectively. Pinocembrin (5,7-dihydroxyflavanone) was found in L. origanoides chemotype A supercritical fluid (CO(2)) extract at a concentration of 0.83+/-0.03 mg/g of dry plant material, which makes this plant an interesting source of an important bioactive flavanone with diverse potential applications in cosmetic, food, and pharmaceutical products. PMID:19950347

  14. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  15. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  16. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
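
    A short illustration of the kernel-PCA idea using scikit-learn's built-in implementation rather than the Gram-Schmidt approximation proposed in the record: concentric circles, which linear PCA cannot separate, become separable along the first kernel principal component.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA, PCA

# Concentric circles: not linearly separable, but an RBF kernel unfolds them.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
lin = PCA(n_components=2).fit_transform(X)
kpc = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)

# Class separation along the first component: linear PCA vs kernel PCA.
print(np.abs(lin[y == 0, 0].mean() - lin[y == 1, 0].mean()))  # ~ 0
print(np.abs(kpc[y == 0, 0].mean() - kpc[y == 1, 0].mean()))  # clearly > 0
```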

  17. Envelope extraction based dimension reduction for independent component analysis in fault diagnosis of rolling element bearing

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Na, Jing; Li, Bin; Fung, Rong-Fong

    2014-06-01

    A robust feature extraction scheme for rolling element bearing (REB) fault diagnosis is proposed by combining envelope extraction with independent component analysis (ICA). In the present approach, envelope extraction is utilized not only to obtain the impulsive component corresponding to the faults from the REB, but also to reduce the dimension of the vibration sources included in the sensor-picked signals. Consequently, the difficulty of applying the ICA algorithm when the number of sensors is limited and the number of sources is unknown can be successfully eliminated. The ICA algorithm is then employed to separate the envelopes according to the independence of the vibration sources. Finally, the vibration features related to the REB faults can be separated from disturbances and clearly exposed by the envelope spectrum. Simulations and experimental tests are conducted to validate the proposed method.
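
    A toy sketch of the two-stage scheme described above: the Hilbert transform extracts an amplitude envelope, and FastICA separates linearly mixed envelopes. The modulation rates (37 Hz and 13 Hz) and all signals are invented for the demo, not bearing fault frequencies from the paper.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.decomposition import FastICA

# Stage 1: demodulate an amplitude-modulated "vibration" with the Hilbert
# envelope. Stage 2: unmix linearly mixed envelopes with FastICA.
fs = 10_000
t = np.arange(0, 1, 1 / fs)
am = (1 + 0.8 * np.sin(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 3000 * t)
env_fault = np.abs(hilbert(am))                  # ~ 1 + 0.8 sin(2*pi*37*t)
env_other = 1 + 0.8 * np.sin(2 * np.pi * 13 * t)

X = np.column_stack([env_fault + 0.5 * env_other,
                     0.4 * env_fault + env_other])
S = FastICA(n_components=2, random_state=0).fit_transform(X)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = [freqs[np.abs(np.fft.rfft(s - s.mean())).argmax()] for s in S.T]
print(sorted(peaks))                             # ~ [13.0, 37.0]
```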

  18. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger-than-average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may enhance program efficacy. PMID:27064553

  19. Short prokaryotic DNA fragment binning using a hierarchical classifier based on linear discriminant analysis and principal component analysis.

    PubMed

    Zheng, Hao; Wu, Hongwei

    2010-12-01

    Metagenomics is an emerging field in which the power of genomic analysis is applied to an entire microbial community, bypassing the need to isolate and culture individual microbial species. Assembling of metagenomic DNA fragments is very much like the overlap-layout-consensus procedure for assembling isolated genomes, but is augmented by an additional binning step to differentiate scaffolds, contigs and unassembled reads into various taxonomic groups. In this paper, we employed n-mer oligonucleotide frequencies as the features and developed a hierarchical classifier (PCAHIER) for binning short (≤ 1,000 bps) metagenomic fragments. Principal component analysis was used to reduce the high dimensionality of the feature space. The hierarchical classifier consists of four layers of local classifiers that are implemented based on linear discriminant analysis. These local classifiers are responsible for binning prokaryotic DNA fragments into superkingdoms, of the same superkingdom into phyla, of the same phylum into genera, and of the same genus into species, respectively. We evaluated the performance of the PCAHIER by using our own simulated data sets as well as the widely used simHC synthetic metagenome data set from the IMG/M system. The effectiveness of the PCAHIER was demonstrated through comparisons against a non-hierarchical classifier, and two existing binning algorithms (TETRA and Phylopythia). PMID:21121023
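
    One local layer of such a hierarchy amounts to a PCA-plus-LDA pipeline; a sketch assuming scikit-learn, with toy stand-ins for the n-mer frequency features (the real classifier stacks four such layers, one per taxonomic rank):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    # X: n-mer oligonucleotide frequency vectors of short DNA fragments,
    # y: labels at one level of the hierarchy (toy data here).
    X = np.random.rand(300, 256)            # e.g. 4-mer frequencies
    y = np.random.randint(0, 3, 300)

    layer = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
    layer.fit(X, y)
    print(layer.predict(X[:5]))
    ```

    Fragments routed by this layer would then be passed to the next local classifier trained only on the predicted group.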

  20. FPGA-based real-time blind source separation with principal component analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Matthew; Meyer-Baese, Uwe

    2015-05-01

    Principal component analysis (PCA) is a popular technique in reducing the dimension of a large data set so that more informed conclusions can be made about the relationship between the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Due to the unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
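
    The GHA is attractive in hardware because its update needs only multiply-accumulate operations; a software sketch of Sanger's rule (the learning rate and epoch count are illustrative, not from the paper):

    ```python
    import numpy as np

    def gha(X, n_components, lr=1e-3, epochs=10, seed=0):
        """Streaming PCA via the Generalized Hebbian Algorithm (Sanger's rule)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                y = W @ x
                # Hebbian term minus lower-triangular decorrelation term
                W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W   # rows converge to the leading principal components

    X = np.random.randn(1000, 8)
    W = gha(X - X.mean(axis=0), n_components=3)
    ```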

  1. Efficient uncertainty quantification in stochastic finite element analysis based on functional principal components

    NASA Astrophysics Data System (ADS)

    Bianchini, Ilaria; Argiento, Raffaele; Auricchio, Ferdinando; Lanzarone, Ettore

    2015-09-01

    The great influence of uncertainties on the behavior of physical systems has always drawn attention to the importance of a stochastic approach to engineering problems. Accordingly, in this paper, we address the problem of solving a Finite Element analysis in the presence of uncertain parameters. We consider an approach in which several solutions of the problem are obtained in correspondence with parameter samples, and propose a novel non-intrusive method, which exploits functional principal component analysis, to keep the computational effort acceptable. The proposed approach allows constructing an optimal basis of the solution space and projecting the full Finite Element problem into a smaller space spanned by this basis. Solving the problem in this reduced space is computationally convenient, and very good approximations are obtained, with an upper bound on the error between the full Finite Element solution and the reduced one. Finally, we assess the applicability of the proposed approach through different test cases, obtaining satisfactory results.
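
    The heart of the method, an optimal basis built from solution samples, is a snapshot-style functional PCA; a minimal sketch with random stand-ins for the Finite Element solutions (the energy threshold is an illustrative choice, not from the paper):

    ```python
    import numpy as np

    # S: snapshot matrix whose columns are full FE solutions u(theta_i)
    # computed for sampled parameter values (toy data here).
    S = np.random.randn(5000, 40)                 # 5000 dofs, 40 samples

    U, s, _ = np.linalg.svd(S - S.mean(axis=1, keepdims=True),
                            full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% energy
    B = U[:, :r]                                  # reduced basis

    # Projecting the full problem onto span(B) gives the small system;
    # here we only check how well B represents one solution.
    u = S[:, 0]
    print(r, np.linalg.norm(u - B @ (B.T @ u)) / np.linalg.norm(u))
    ```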

  2. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significant link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  3. Online handwritten signature verification using neural network classifier based on principal component analysis.

    PubMed

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Yussof, Salman; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly, with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we proposed a systematic approach to online signature verification through the use of multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  4. Online Handwritten Signature Verification Using Neural Network Classifier Based on Principal Component Analysis

    PubMed Central

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly, with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we proposed a systematic approach to online signature verification through the use of multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  5. Independent component analysis based channel equalization for 6 × 6 MIMO-OFDM transmission over few-mode fiber.

    PubMed

    He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua

    2016-05-01

    We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the proposed two ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) can achieve comparable performance while requiring far fewer training symbols. Consequently, the overheads for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. Meanwhile, we also experimentally investigate the convergence speed of the proposed ICA-based CEs. PMID:27137537

  6. Large sample inference for a win ratio analysis of a composite outcome based on prioritized components.

    PubMed

    Bebu, Ionut; Lachin, John M

    2016-01-01

    Composite outcomes are common in clinical trials, especially for multiple time-to-event outcomes (endpoints). The standard approach that uses the time to the first outcome event has important limitations. Several alternative approaches have been proposed to compare treatment versus control, including the proportion in favor of treatment and the win ratio. Herein, we construct tests of significance and confidence intervals in the context of composite outcomes based on prioritized components using the large sample distribution of certain multivariate multi-sample U-statistics. This non-parametric approach provides a general inference for both the proportion in favor of treatment and the win ratio, and can be extended to stratified analyses and the comparison of more than two groups. The proposed methods are illustrated with time-to-event outcomes data from a clinical trial. PMID:26353896
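
    A simplified sketch of the prioritized pairwise comparison behind the win ratio (the paper's U-statistic variance, confidence intervals, and stratification are not shown; the tuple encoding and the None-for-noncomparable convention are assumptions for illustration):

    ```python
    def win_ratio(treat, control):
        """treat, control: lists of tuples of prioritized outcomes, most
        important first; larger = better, None = not comparable."""
        wins = losses = 0
        for t in treat:
            for c in control:
                for tc, cc in zip(t, c):   # walk down the priority list
                    if tc is None or cc is None:
                        continue           # indeterminate: try next component
                    if tc > cc:
                        wins += 1
                        break
                    if tc < cc:
                        losses += 1
                        break
                    # exact tie: fall through to the next component
        return wins / losses if losses else float("inf")

    # e.g. (survival time, hospitalization-free time), higher is better
    treat = [(5.0, 2.0), (3.0, None)]
    control = [(4.0, 2.5), (5.0, 1.0)]
    print(win_ratio(treat, control))
    ```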

  7. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms (when comparing them with foreign requirements) for the consideration of calculating situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) comparison of native and foreign normative requirements (PNAE G-7-002-86, Code ASME, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) comparison of RF and USA Norms of pressure vessel material acceptance and also data of pressure vessel hydrotests; (3) comparison of Norms on the presence of defects (RF and USA) in NPP vessels, development of defect schematization rules, and foundation of a calculated defect (semi-axis correlation a/b) for pressure vessel and piping components; (4) sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity to correct estimation methods of ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculating situations (a) LBB and (b) short cracks; (7) the necessity to correct estimation methods of ultimate states with consideration of static and cyclic loading (warm prestressing effect) of the pressure vessel, and estimation of the stability of this effect; (8) proposals on PNAE G-7-002-86 Norm corrections.

  8. Study of T-wave morphology parameters based on Principal Components Analysis during acute myocardial ischemia

    NASA Astrophysics Data System (ADS)

    Baglivo, Fabricio Hugo; Arini, Pedro David

    2011-12-01

    Electrocardiographic repolarization abnormalities can be detected by Principal Components Analysis of the T-wave. In this work we studied the effect of signal averaging on the mean value and reproducibility of the ratio of the 2nd to the 1st eigenvalue of the T-wave (T21W) and the absolute and relative T-wave residuum (TabsWR and TrelWR) in the ECG during ischemia induced by Percutaneous Coronary Intervention. Also, the intra-subject and inter-subject variability of T-wave parameters has been analyzed. Results showed that TrelWR and TabsWR evaluated from the average of 10 complexes had lower values and higher reproducibility than those obtained from 1 complex. On the other hand, T21W calculated from 10 complexes did not show statistical differences versus the T21W calculated on single beats. The results of this study corroborate that, with a signal averaging technique, the 2nd and the 1st eigenvalues are not affected by noise while the 4th to 8th eigenvalues are strongly affected by it, suggesting the use of the signal averaging technique before calculation of the absolute and relative T-wave residuum. Finally, we have shown that T-wave morphology parameters present high intra-subject stability.
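
    A compact sketch of these PCA-derived repolarization indices, assuming an (n_leads x n_samples) matrix of signal-averaged T-waves; the residuum here follows the common convention (non-dipolar energy beyond the first three eigenvectors), which the abstract does not spell out:

    ```python
    import numpy as np

    def t_wave_pca(T):
        """T: (n_leads, n_samples) signal-averaged T-waves, e.g. 8 leads."""
        Tc = T - T.mean(axis=1, keepdims=True)
        eig = np.linalg.svd(Tc, compute_uv=False) ** 2   # eigenvalues of Tc @ Tc.T
        t21w = eig[1] / eig[0]                # 2nd-to-1st eigenvalue ratio
        trel = eig[3:].sum() / eig.sum()      # relative (non-dipolar) residuum
        return t21w, trel

    T = np.random.randn(8, 120)               # toy stand-in for averaged beats
    print(t_wave_pca(T))
    ```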

  9. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix

    PubMed Central

    Gao, H; Zhang, T; Wu, Y; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-01-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association studies (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform the PCA for the residual covariance matrix instead of the phenotypic covariance matrix, based on which multiple traits are transformed to a group of pseudo principal components. The PCA for the residual covariance matrix allows analyzing each pseudo principal component separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. However, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering traits and meat quality traits in beef cattle. PMID:24984606
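
    The key step, PCA on the residual rather than the phenotypic covariance, can be sketched as below (toy data; the fast LASSO scan over markers described in the paper is omitted):

    ```python
    import numpy as np

    n, t = 500, 20                                   # individuals, traits
    Y = np.random.randn(n, t)                        # phenotypes (toy)
    X = np.column_stack([np.ones(n), np.random.randn(n, 2)])  # fixed effects

    beta = np.linalg.lstsq(X, Y, rcond=None)[0]      # per-trait fixed effects
    R = Y - X @ beta                                 # residuals

    vals, V = np.linalg.eigh(np.cov(R, rowvar=False))  # residual covariance PCA
    Y_pseudo = Y @ V   # pseudo principal components: each column can now be
                       # scanned with an ordinary univariate GWAS model
    ```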

  10. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Image (fMRI) data. Each participant's data consisted of a resting state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or else both the phenotypic and imaging data. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2% produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two class diagnosis and 66.8% on three class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two class diagnosis and 68.6% on three class diagnosis-better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012). PMID:23162439
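
    A minimal sketch of the shape of the best pipeline (kernelized PCA features feeding an SVM), assuming scikit-learn; the paper's kPCA-st operates jointly over space and time and is combined with FFT features and phenotypic data, and all dimensions and kernel parameters below are illustrative:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # X: one row per participant (e.g. flattened fMRI-derived features),
    # y: 0 = healthy control, 1 = ADHD (toy data here).
    X = np.random.randn(100, 2000)
    y = np.random.randint(0, 2, 100)

    clf = make_pipeline(KernelPCA(n_components=30, kernel="rbf", gamma=1e-4),
                        SVC(kernel="linear"))
    clf.fit(X, y)
    print(clf.score(X, y))
    ```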

  11. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    NASA Astrophysics Data System (ADS)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined the Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli vs. the neutral. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing the emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records of each emotional condition were concatenated with the neutral, and a three-step analysis was applied to each of them in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, the clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition vs. the neutral. Compared to the neutral, both emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were well consistent with other functional imaging

  12. Analysis of the mineral acid-base components of acid-neutralizing capacity in Adirondack Lakes

    NASA Astrophysics Data System (ADS)

    Munson, R. K.; Gherini, S. A.

    1993-04-01

    Mineral acids and bases influence pH largely through their effects on acid-neutralizing capacity (ANC). This influence becomes particularly significant as ANC approaches zero. Analysis of data collected by the Adirondack Lakes Survey Corporation (ALSC) from 1469 lakes throughout the Adirondack region indicates that variations in ANC in these lakes correlate well with base cation concentrations (CB), but not with the sum of mineral acid anion concentrations (CA). This is because CA is relatively constant across the Adirondacks, whereas CB varies widely. Processes that supply base cations to solution are ion-specific. Sodium and silica concentrations are well correlated, indicating a common source, mineral weathering. Calcium and magnesium also covary but do not correlate well with silica. This indicates that ion exchange is a significant source of these cations in the absence of carbonate minerals. Iron and manganese concentrations are elevated in the lower waters of some lakes due to reducing conditions. This leads to an ephemeral increase in CB and ANC. When the lakes mix and oxic conditions are restored, these ions largely precipitate from solution. Sulfate is the dominant mineral acid anion in ALSC lakes. Sulfate concentrations are lowest in seepage lakes, commonly about 40 μeq/L less than in drainage lakes. This is due in part to the longer hydraulic detention time in seepage lakes, which allows slow sulfate reduction reactions more time to decrease lake sulfate concentration. Nitrate typically influences ANC during events such as snowmelt. Chloride concentrations are generally low, except in lakes impacted by road salt.

  13. Design and Analysis of a Novel Six-Component F/T Sensor based on CPM for Passive Compliant Assembly

    NASA Astrophysics Data System (ADS)

    Liang, Qiaokang; Zhang, Dan; Wang, Yaonan; Ge, Yunjian

    2013-10-01

    This paper presents the design and analysis of a six-component Force/Torque (F/T) sensor based on the mechanism of the Compliant Parallel Mechanism (CPM). The force sensor is used to measure forces along the x-, y-, and z-axis (Fx, Fy and Fz) and moments about the x-, y-, and z-axis (Mx, My and Mz) simultaneously and to provide passive compliance during parts handling and assembly. In particular, the structural design, the details of the measuring principle and the kinematics are presented. Afterwards, based on the Design of Experiments (DOE) approach provided by the software ANSYS®, a Finite Element Analysis (FEA) is performed with the objective of achieving both high sensitivity and isotropy of the sensor. The results of the FEA show that the proposed sensor possesses high performance and robustness.

  14. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, H.D.; Ayers, C.W.; Casada, D.A.

    1996-11-26

    A new instrument is described for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician. 4 figs.

  15. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.

  16. A Class-Information-Based Sparse Component Analysis Method to Identify Differentially Expressed Genes on RNA-Seq Data.

    PubMed

    Liu, Jin-Xing; Xu, Yong; Gao, Ying-Lian; Zheng, Chun-Hou; Wang, Dong; Zhu, Qi

    2016-01-01

    With the development of deep sequencing technologies, many RNA-Seq data have been generated. Researchers have proposed many methods based on sparse theory to identify the differentially expressed genes from these data. In order to improve the performance of sparse principal component analysis, in this paper, we propose a novel class-information-based sparse component analysis (CISCA) method which introduces the class information via a total scatter matrix. First, CISCA normalizes the RNA-Seq data by using a Poisson model to obtain their differential sections. Second, the total scatter matrix is obtained by combining the between-class and within-class scatter matrices. Third, we decompose the total scatter matrix by using singular value decomposition and construct a new data matrix by using the singular values and left singular vectors. Then, aiming at obtaining sparse components, CISCA decomposes the constructed data matrix by solving an optimization problem with sparse constraints on the loading vectors. Finally, the differentially expressed genes are identified by using the sparse loading vectors. The results on simulated and real RNA-Seq data demonstrate that our method is effective and suitable for analyzing these data. PMID:27045835

  17. Design and Validation of a Morphing Myoelectric Hand Posture Controller Based on Principal Component Analysis of Human Grasping

    PubMed Central

    Segil, Jacob L.; Weir, Richard F. ff.

    2015-01-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures including functional grasps using between two and four surface electromyography (EMG) electrodes pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest performing map controlled over 13 out of 15 joints accurately. PMID:23649286

  18. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
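
    A minimal sketch of the three blocks (ICA for dimensionality reduction, PLS for sensory descriptors, an ANN for classification), assuming scikit-learn; array shapes and hyperparameters are illustrative, not from the paper:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(26, 16)              # e-nose responses, one row per wine
    y_desc = np.random.rand(26, 1)          # a panel sensory descriptor
    y_class = np.random.randint(0, 4, 26)   # wine class labels

    Z = FastICA(n_components=5, random_state=0).fit_transform(X)
    pls = PLSRegression(n_components=3).fit(Z, y_desc)       # prediction
    ann = MLPClassifier(hidden_layer_sizes=(10,),
                        max_iter=2000).fit(Z, y_class)       # classification
    print(pls.predict(Z[:3]).ravel(), ann.predict(Z[:3]))
    ```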

  19. Development of a new signal processing algorithm based on independent component analysis for single channel ECG data.

    PubMed

    Lee, J; Lee, K J; Yoo, S K

    2004-01-01

    In this paper, we propose a new signal processing algorithm based on independent component analysis (ICA) for single-channel ECG data. To apply ICA to single-channel data, mixed (multichannel) signals are constructed by adding delays to the original data. Signal enhancement is achieved by ICA. To validate the usefulness of the enhanced signal, QRS complex detection was performed using the Hilbert transform and the wavelet transform, and good QRS detection efficacy was obtained. Furthermore, a signal that could not be filtered properly using an existing algorithm also showed better enhancement. In the future, we will study algorithm optimization and simplification. PMID:17271650
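
    The delay-embedding trick, building a pseudo multichannel observation from a single lead so that ICA becomes applicable, can be sketched as follows (assuming scikit-learn's FastICA; the lag count is illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def delay_embed_ica(x, n_lags=8, n_components=4):
        """Stack lagged copies of a single-channel signal and unmix them."""
        n = len(x) - n_lags
        X = np.stack([x[k:k + n] for k in range(n_lags)], axis=1)
        return FastICA(n_components=n_components,
                       random_state=0).fit_transform(X)

    x = np.random.randn(5000)               # stand-in for a noisy ECG lead
    S = delay_embed_ica(x)                  # columns: estimated components
    print(S.shape)
    ```

    The component dominated by QRS energy would then be passed to the Hilbert- or wavelet-based detector.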

  20. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    SciTech Connect

    Yu, P.

    2008-01-01

    More recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to characterize molecular spectra by utilizing the entire spectral information rather than a single intensity or frequency point. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional groups and biopolymers. By applying these four methods, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
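
    A short sketch of multi-component peak modeling with one Gaussian and one Lorentzian band, using SciPy's curve_fit on a synthetic spectrum (band positions and widths are invented for illustration):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def lorentzian(x, a, mu, gamma):
        return a * gamma**2 / ((x - mu) ** 2 + gamma**2)

    def two_peak(x, a1, m1, s1, a2, m2, g2):
        return gaussian(x, a1, m1, s1) + lorentzian(x, a2, m2, g2)

    x = np.linspace(1500, 1700, 400)                  # wavenumbers, cm^-1
    y = two_peak(x, 1.0, 1550, 8, 0.6, 1655, 10)
    y += 0.01 * np.random.randn(x.size)               # synthetic spectrum

    popt, _ = curve_fit(two_peak, x, y, p0=[1, 1545, 10, 0.5, 1650, 12])
    print(popt)   # fitted amplitudes, centers, and widths of the two peaks
    ```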

  1. A review of component analysis based on magnetization curves: state-of-the art and future developments.

    NASA Astrophysics Data System (ADS)

    Egli, R.

    2005-05-01

    Rocks and sediments inevitably contain mixtures of magnetic minerals, grain sizes, and weathering states. Most rock magnetic interpretation techniques rely on a set of value parameters, such as susceptibility and isothermal/anhysteretic remanent magnetization (ARM or IRM). These parameters are usually interpreted in terms of the mineralogy and domain state of the magnetic particles. In some cases, such interpretation of natural samples can be misleading or inconclusive. A less constrained approach to magnetic mineralogy models is based on the analysis of magnetization curves, which are decomposed into a set of elementary contributions. Each contribution is called a magnetic component, and characterizes a specific set of magnetic grains with a unimodal distribution of physical and chemical properties. Magnetic components are related to specific biogeochemical signatures rather than representing traditional categories, such as SD magnetite. This unconventional approach gives a direct link to the interpretation of natural processes on a multidisciplinary level. Despite the aforementioned advantages, component analysis has not yet come into wide use, for three reasons: 1) the lack of quantitative magnetic models for natural, non-ideal magnetic grains and/or the statistical distribution of their properties; 2) the intrinsic mathematical complexity of unmixing problems; and 3) the need for accurate measurements that are beyond the usual standards. Since magnetic components rarely occur alone in natural samples, unmixing techniques and rock magnetic models are interdependent. A considerable effort has recently been undertaken to verify the basic properties of magnetization curves and obtain useful and reliable solutions of the unmixing problem. The result of this experience is a collection of a few hundred magnetic components identified in various natural environments. The properties of these components are controlled by their biogeochemical history, regardless of the provenance of the

  2. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-01-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms. PMID:26262622

  3. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    PubMed Central

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-01-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms. PMID:26262622

  4. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting the domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA was not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
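
    tICA reduces to a generalized eigenproblem between the time-lagged and instantaneous covariance matrices; a minimal sketch (trajectory loading and the rigid-body reduction are omitted, the lag time is illustrative, and C(0) is assumed full rank):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def tica(X, lag):
        """X: (n_frames, n_coords) trajectory, e.g. rigid-body coordinates.
        Solves C(lag) v = lambda C(0) v; large lambda ~ slow modes."""
        X = X - X.mean(axis=0)
        C0 = X.T @ X / len(X)
        Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)
        Ct = 0.5 * (Ct + Ct.T)              # symmetrize the lagged covariance
        lam, V = eigh(Ct, C0)               # generalized eigenproblem
        order = np.argsort(lam)[::-1]       # slowest modes first
        return lam[order], V[:, order]

    X = np.cumsum(np.random.randn(6000, 12), axis=0)   # toy slow trajectory
    lam, V = tica(X, lag=100)
    print(lam[:3])
    ```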

  5. ECG-based gating in ultra high field cardiovascular magnetic resonance using an independent component analysis approach

    PubMed Central

    2013-01-01

    Background In Cardiovascular Magnetic Resonance (CMR), the synchronization of image acquisition with heart motion is performed in clinical practice by processing the electrocardiogram (ECG). The ECG-based synchronization is well established for MR scanners with magnetic fields up to 3 T. However, this technique is prone to errors in ultra high field environments, e.g. in 7 T MR scanners as used in research applications. The high magnetic fields cause severe magnetohydrodynamic (MHD) effects which disturb the ECG signal. Image synchronization is thus less reliable and yields artefacts in CMR images. Methods A strategy based on Independent Component Analysis (ICA) was pursued in this work to enhance the ECG contribution and attenuate the MHD effect. ICA was applied to 12-lead ECG signals recorded inside a 7 T MR scanner. An automatic source identification procedure was proposed to identify an independent component (IC) dominated by the ECG signal. The identified IC was then used for detecting the R-peaks. The presented ICA-based method was compared to other R-peak detection methods using 1) the raw ECG signal, 2) the raw vectorcardiogram (VCG), 3) the state-of-the-art gating technique based on the VCG, 4) an updated version of the VCG-based approach and 5) the ICA of the VCG. Results ECG signals from eight volunteers were recorded inside the MR scanner. Recordings with an overall length of 87 min accounting for 5457 QRS complexes were available for the analysis. The records were divided into a training and a test dataset. In terms of R-peak detection within the test dataset, the proposed ICA-based algorithm achieved a detection performance with an average sensitivity (Se) of 99.2%, a positive predictive value (+P) of 99.1%, with an average trigger delay and jitter of 5.8 ms and 5.0 ms, respectively. Long term stability of the demixing matrix was shown based on two measurements of the same subject, each being separated by one year, whereas an averaged detection

  6. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and

  7. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA) -based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported [1,2,3] unmixing algorithm. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  8. Bearing fault recognition method based on neighbourhood component analysis and coupled hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Hongchao; Yuan, Haodong

    2016-01-01

    Due to the important role rolling element bearings play in rotating machines, a condition monitoring and fault diagnosis system should be established to avoid abrupt breakage during operation. Various features from the time, frequency and time-frequency domains are usually used for bearing or machinery condition monitoring. In this study, a neighbourhood component analysis (NCA) based feature extraction (FE) approach is proposed to reduce the dimensionality of the original feature set and avoid the "curse of dimensionality". Furthermore, a coupled hidden Markov model (CHMM) based on multichannel data acquisition is applied to diagnose bearing or machinery faults. Two case studies are presented to validate the proposed approach in both bearing fault diagnosis and fault severity classification. The experimental results show that the proposed NCA-CHMM can remove redundant information, fuse data from different channels and improve the diagnosis results.

  9. An Assessment of Hermite Function Based Approximations of Mutual Information Applied to Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Sorensen, Julian

    2008-12-01

    At the heart of many ICA techniques is a nonparametric estimate of an information measure, usually via nonparametric density estimation, for example, kernel density estimation. While not as popular as kernel density estimators, orthogonal functions can be used for nonparametric density estimation (via a truncated series expansion whose coefficients are calculated from the observed data). While such estimators do not necessarily yield a valid density, as kernel density estimators do, they are faster to calculate, in particular for a modified version of Renyi's entropy of order 2. In this paper, we compare the performance of ICA using Hermite series based estimates of Shannon's and Renyi's mutual information to that of Gaussian kernel based estimates. The comparisons also include ICA using the RADICAL estimate of Shannon's entropy and a FastICA estimate of negentropy.

  10. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate the building boundaries in urban and densely vegetated area. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with the process of Connected Component Analysis (CCA) where the number of objects present in the test scene are identified followed by initial boundary detection and labelling. Additionally, a graph from the connected components is generated, where each black pixel corresponds to a node. An edge of a unit distance is defined between a black pixel and a neighbouring black pixel, if any. An edge does not exist from a black pixel to a neighbouring white pixel, if any. This phenomenon produces a disconnected components graph, where each component represents a prospective building or a dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, if possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated building's occluded parts. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. The proposed technique is evaluated using two Australian data sets
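
    The connected component step at the heart of the second phase can be sketched with SciPy's ndimage labelling (the mask here is synthetic; real masks come from the non-ground LiDAR points):

    ```python
    import numpy as np
    from scipy import ndimage

    # primary_mask: True where LiDAR returns are elevated (buildings, trees)
    primary_mask = np.zeros((100, 100), bool)
    primary_mask[10:40, 15:45] = True       # a "building"
    primary_mask[60:75, 60:90] = True       # a patch of "vegetation"

    labels, n_objects = ndimage.label(primary_mask)   # one label per component
    sizes = ndimage.sum(primary_mask, labels, range(1, n_objects + 1))
    boxes = ndimage.find_objects(labels)              # initial bounding boxes
    print(n_objects, sizes, boxes[0])
    ```

    NDVI, entropy, and height cues would then discriminate building components from vegetation before the boundary refinement described above.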

  11. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  12. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    SciTech Connect

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-07

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  13. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  14. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components all of the textbooks content (including texts, exercises, and illustrations) was examined based on…

  15. Large Sample Group Independent Component Analysis of Functional Magnetic Resonance Imaging Using Anatomical Atlas-Based Reduction and Bootstrapped Clustering

    PubMed Central

    Anderson, Ariana; Bramen, Jennifer; Douglas, Pamela K.; Lenartowicz, Agatha; Cho, Andrew; Culbertson, Chris; Brody, Arthur L.; Yuille, Alan L.; Cohen, Mark S.

    2011-01-01

    Independent component analysis (ICA) is a popular method for the analysis of functional magnetic resonance imaging (fMRI) signals that is capable of revealing connected brain systems of functional significance. To be computationally tractable, estimating the independent components (ICs) inevitably requires one or more dimension reduction steps. Whereas most algorithms perform such reductions in the time domain, the input data are much more extensive in the spatial domain, and there is broad consensus that the brain obeys rules of localization of function into regions that are smaller in number than the number of voxels in a brain image. These functional units apparently reorganize dynamically into networks under different task conditions. Here we develop a new approach to ICA, producing group results by bagging and clustering over hundreds of pooled single-subject ICA results that have been projected to a lower-dimensional subspace. Averages of anatomically based regions are used to compress the single subject-ICA results prior to clustering and resampling via bagging. The computational advantages of this approach make it possible to perform group-level analyses on datasets consisting of hundreds of scan sessions by combining the results of within-subject analysis, while retaining the theoretical advantage of mimicking what is known of the functional organization of the brain. The result is a compact set of spatial activity patterns that are common and stable across scan sessions and across individuals. Such representations may be used in the context of statistical pattern recognition supporting real-time state classification. PMID:22049263

  16. Equity in health care in Namibia: developing a needs-based resource allocation formula using principal components analysis

    PubMed Central

    Zere, Eyob; Mandlhate, Custodia; Mbeeli, Thomas; Shangula, Kalumbi; Mutirua, Kauto; Kapenambili, William

    2007-01-01

    Background The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Methods Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Results Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. Conclusion To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care. PMID:17391533

  17. Failure Analysis of Ceramic Components

    SciTech Connect

    B.W. Morris

    2000-06-29

    Ceramics are being considered for a wide range of structural applications due to their low density and their ability to retain strength at high temperatures. The inherent brittleness of monolithic ceramics requires a departure from the deterministic design philosophy utilized to analyze metallic structural components. The design program "Ceramic Analysis and Reliability Evaluation of Structures Life" (CARES/LIFE) developed by NASA Lewis Research Center uses a probabilistic approach to predict the reliability of monolithic components under operational loading. The objective of this study was to develop an understanding of the theories used by CARES/LIFE to predict the reliability of ceramic components and to assess the ability of CARES/LIFE to accurately predict the fast fracture behavior of monolithic ceramic components. A finite element analysis was performed to determine the temperature and stress distribution of a silicon carbide O-ring under diametral compression. The results of the finite element analysis were supplied as input into CARES/LIFE to determine the fast fracture reliability of the O-ring. Statistical material strength parameters were calculated from four-point flexure bar test data. The predicted reliability showed excellent correlation with O-ring compression test data, indicating that the CARES/LIFE program can be used to predict the reliability of ceramic components subjected to complicated stress states using material properties determined from simple uniaxial tensile tests.

  18. Simultaneous multi-wavelength phase-shifting interferometry based on principal component analysis with a color CMOS

    NASA Astrophysics Data System (ADS)

    Fan, Jingping; Lu, Xiaoxu; Xu, Xiaofei; Zhong, Liyun

    2016-05-01

    From a sequence of simultaneous multi-wavelength phase-shifting interferograms (SMWPSIs) recorded by a color CMOS, a principal component analysis (PCA) based multi-wavelength interferometry (MWI) is proposed. First, a sequence of SMWPSIs with unknown phase shifts is recorded with a single-chip color CMOS camera. Subsequently, the wrapped phases of each single wavelength are retrieved with the PCA algorithm. Finally, the unambiguous phase of the extended synthetic wavelength is achieved by subtraction between the wrapped single-wavelength phases. In addition, to eliminate the additional phase introduced by the microscope and the intensity crosstalk among the three color channels, a two-step phase compensation method, with and without the measured object in the experimental system, is employed. Compared with conventional single-wavelength phase-shifting interferometry, because neither phase-shift calibration nor phase unwrapping is required, the actual unambiguous phase of the measured object can be obtained conveniently with the proposed PCA-based MWI method. Both numerical simulations and experimental results demonstrate that the proposed PCA-based MWI method enlarges the measuring range without amplifying the noise level.
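
    A minimal sketch of PCA-based phase demodulation for one wavelength channel, in the style of the well-known PCA phase-shifting algorithm (which the abstract does not detail); the synthetic-wavelength phase would follow by subtracting two such single-wavelength results:

    ```python
    import numpy as np

    def pca_phase(frames):
        """frames: (n_frames, H, W) interferograms with unknown phase shifts.
        The two leading principal components of the mean-removed stack act
        as quadrature signals, so the wrapped phase is their arctangent."""
        n, h, w = frames.shape
        D = frames.reshape(n, -1).astype(float)
        D -= D.mean(axis=0)                  # remove the background term
        _, _, Vt = np.linalg.svd(D, full_matrices=False)
        pc1, pc2 = Vt[0].reshape(h, w), Vt[1].reshape(h, w)
        return np.arctan2(pc2, pc1)          # wrapped phase (up to sign/offset)
    ```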

  19. Application of independent component analysis method in real-time spectral analysis of gaseous mixtures for acousto-optical spectrometers based on differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Fadeyev, A. V.; Pozhar, V. E.

    2012-10-01

    The reliability of a time-optimized method for the remote optical spectral analysis of gas-polluted ambient air is discussed. The method, based on differential optical absorption spectroscopy (DOAS), enables fragmentary spectrum registration (FSR) and is suitable for random-spectral-access (RSA) optical spectrometers such as acousto-optical (AO) ones. An algorithm based on the statistical method of independent component analysis (ICA) is proposed for estimating the correctness of the absorption-line selection used by the FSR method. Implementations of the ICA method for RSA-based real-time adaptive systems are considered. Numerical simulations are presented using real spectra detected by the trace gas monitoring system GAOS, which is based on an AO spectrometer.
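
    For a feel of the ICA step, the sketch below unmixes synthetic optical-depth spectra into independent line patterns with scikit-learn's FastICA; the line shapes and mixing matrix are invented, whereas the real system would start from DOAS-registered spectra.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    wl = np.linspace(0, 1, 500)
    gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

    # Stand-ins for two gases' absorption line patterns on a common grid.
    src = np.vstack([
        gauss(0.30, 0.010) + gauss(0.55, 0.015),   # "gas A"
        gauss(0.42, 0.020) + gauss(0.80, 0.010),   # "gas B"
    ])

    # Mixed optical-depth spectra recorded under different conditions.
    A = np.array([[1.0, 0.4], [0.3, 1.2], [0.8, 0.8]])
    X = A @ src + 0.01 * np.random.default_rng(1).normal(size=(3, wl.size))

    # ICA recovers the independent line patterns (up to scale/permutation);
    # these can be checked against the lines chosen for fragmentary
    # spectrum registration.
    S_est = FastICA(n_components=2, random_state=0).fit_transform(X.T).T
    print(S_est.shape)   # (2, 500)
    ```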

  20. Source-Based Morphometry: The Use of Independent Component Analysis to Identify Gray Matter Differences With Application to Schizophrenia

    PubMed Central

    Xu, Lai; Groth, Karyn M.; Pearlson, Godfrey; Schretlen, David J.; Calhoun, Vince D.

    2009-01-01

    We present a multivariate alternative to the voxel-based morphometry (VBM) approach called source-based morphometry (SBM), to study gray matter differences between patients and healthy controls. The SBM approach begins with the same preprocessing procedures as VBM. Next, independent component analysis is used to identify naturally grouping, maximally independent sources. Finally, statistical analyses are used to determine the significant sources and their relationship to other variables. The identified “source networks,” groups of spatially distinct regions with common covariation among subjects, provide information about localization of gray matter changes and their variation among individuals. In this study, we first compared VBM and SBM via a simulation and then applied both methods to real data obtained from 120 chronic schizophrenia patients and 120 healthy controls. SBM identified five gray matter sources as significantly associated with schizophrenia. These included sources in the bilateral temporal lobes, thalamus, basal ganglia, parietal lobe, and frontotemporal regions. None of these showed an effect of sex. Two sources in the bilateral temporal and parietal lobes showed age-related reductions. The most significant source of schizophrenia-related gray matter changes identified by SBM occurred in the bilateral temporal lobe, while the most significant change found by VBM occurred in the thalamus. The SBM approach found changes not identified by VBM in basal ganglia, parietal, and occipital lobe. These findings show that SBM is a multivariate alternative to VBM, with wide applicability to studying changes in brain structure. PMID:18266214

  1. Ground-roll separation of seismic data based on morphological component analysis in two-dimensional domain

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Hong; Qu, Guang-Zhong; Zhang, Yang; Bi, Yun-Yun; Wang, Jin-Ju

    2016-03-01

    Ground roll is an interference wave that severely degrades the signal-to-noise ratio of seismic data and affects its subsequent processing and interpretation. In this study, according to differences in morphological characteristics between ground roll and reflected waves, we use morphological component analysis based on two-dimensional dictionaries to separate the two. Because ground roll is characterized by low frequency, low velocity, and dispersion, we select the two-dimensional undecimated discrete wavelet transform as its sparse representation dictionary. Because of the strong local correlation of the reflected waves, we select the two-dimensional local discrete cosine transform as their sparse representation dictionary. A sparse representation model of seismic data is constructed based on the two-dimensional joint dictionary, and a block coordinate relaxation algorithm is used to solve the model and decompose the seismic record into a reflected-wave part and a ground-roll part. Good results on synthetic and real seismic data indicate that the model considerably suppresses strong-energy ground roll while effectively protecting the waveform of the reflected waves.
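
    The decomposition can be sketched in one dimension: block coordinate relaxation alternately soft-thresholds each morphological component in its own dictionary under a decreasing threshold schedule. This is a much-simplified analogue (1-D wavelet plus global DCT instead of the paper's 2-D undecimated wavelet and local DCT; all signals and thresholds are invented).

    ```python
    import numpy as np
    import pywt
    from scipy.fft import dct, idct

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    roll = np.exp(-((t - 0.5) / 0.2) ** 2) * np.sin(2 * np.pi * 5 * t)   # low-frequency
    refl = np.sin(2 * np.pi * 60 * t) * ((t > 0.4) & (t < 0.6))          # oscillatory
    s = roll + refl + 0.01 * rng.normal(size=t.size)

    def wav_shrink(x, lam):
        # Soft-threshold the wavelet coefficients of x.
        coeffs = [pywt.threshold(c, lam, mode="soft") for c in pywt.wavedec(x, "db8")]
        return pywt.waverec(coeffs, "db8")[: x.size]

    def dct_shrink(x, lam):
        # Soft-threshold the DCT coefficients of x.
        return idct(pywt.threshold(dct(x, norm="ortho"), lam, mode="soft"), norm="ortho")

    x_roll = np.zeros_like(s)
    x_refl = np.zeros_like(s)
    for lam in np.linspace(0.5, 0.01, 30):   # decreasing threshold schedule
        # Block coordinate relaxation: refit each morphological component
        # against the residual left by the other.
        x_roll = wav_shrink(s - x_refl, lam)
        x_refl = dct_shrink(s - x_roll, lam)
    print(np.linalg.norm(s - x_roll - x_refl))
    ```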

  2. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
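
    The PCA background-subtraction idea maps naturally to a few lines of numpy (a software stand-in for the FPGA pipeline; image sizes, the number of retained components, and the threshold are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, N = 32, 32, 200
    background = rng.normal(size=(H, W))
    frames = background[None] + 0.05 * rng.normal(size=(N, H, W))
    X = frames.reshape(N, -1)

    # Background model: top eigenvectors of the mean-centred frame set.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = Vt[:8]                                # PCA linear subspace

    def motion_mask(frame, thresh=0.2):
        """Pixels poorly explained by the background subspace are flagged
        as motion (cf. the dynamically thresholded differences above)."""
        v = frame.reshape(-1) - mu
        recon = basis.T @ (basis @ v)             # projection onto subspace
        return np.abs(v - recon).reshape(H, W) > thresh

    test = frames[0].copy()
    test[10:15, 10:15] += 2.0                     # synthetic moving object
    print(motion_mask(test).sum(), "pixels flagged")
    ```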

  3. High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang

    2016-07-01

    Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, development of systems with high imaging speed, reasonable cost, and manageable data flow is needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images with sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, thus validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that for BP with full sampling. In addition, compared with compressed sensing-based PACT, PCA-PACT had approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.
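
    The non-iterative recovery idea can be sketched as a least-squares fit in a learned PCA basis: given roughly half the pixels of a new image and a basis learned from fully sampled references, the coefficients follow from a single linear solve. All dimensions and data here are synthetic stand-ins, not the actual PCA-PACT pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, d, k = 300, 32 * 32, 40
    # Toy training images that genuinely live in a 40-dimensional subspace.
    train = rng.normal(size=(n_train, k)) @ rng.normal(size=(k, d))
    mu = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
    B = Vt[:k]                                        # (k, d) PCA basis

    truth = train[0]                                  # pretend: a new frame
    idx = rng.choice(d, size=d // 2, replace=False)   # keep ~50% of the pixels

    # One linear solve recovers the PCA coefficients from the sparse
    # samples, in contrast to iterative compressed-sensing reconstruction.
    a, *_ = np.linalg.lstsq(B[:, idx].T, truth[idx] - mu[idx], rcond=None)
    recon = mu + a @ B
    print(np.linalg.norm(recon - truth) / np.linalg.norm(truth))   # ~0
    ```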

  4. Multivariate Principal Component Analysis and Case-Based Reasoning for monitoring, fault detection and diagnosis in a WWTP.

    PubMed

    Ruiz, Magda; Sin, Gürkan; Berjaga, Xavier; Colprim, Jesús; Puig, Sebastià; Colomer, Joan

    2011-01-01

    The main idea of this paper is to develop a methodology for process monitoring, fault detection and predictive diagnosis of a WasteWater Treatment Plant (WWTP). To achieve this goal, a combination of Multiway Principal Component Analysis (MPCA) and Case-Based Reasoning (CBR) is proposed. First, MPCA is used to reduce the multi-dimensional nature of online process data, which summarises most of the variance of the process data in a few (new) variables. Next, the outputs of MPCA (t-scores, Q-statistic) are provided as inputs (descriptors) to the CBR method, which is employed to identify problems and propose appropriate solutions (hence diagnosis) based on previously stored cases. The methodology is evaluated on a pilot-scale SBR performing nitrogen, phosphorus and COD removal, to help diagnose abnormal situations in process operation. Finally, it is believed that the methodology is a promising tool for automatic diagnosis and real-time warning, which can be used for daily management of plant operation. PMID:22335109
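
    The descriptors handed from the PCA model to the CBR stage can be sketched directly: t-scores, Hotelling's T2 and the Q-statistic (squared prediction error) computed against a model of normal operation. The data and component count below are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_normal = rng.normal(size=(500, 12))             # unfolded batch-process data
    pca = PCA(n_components=3).fit(X_normal)

    def monitoring_stats(x):
        t = pca.transform(x[None])[0]                     # t-scores
        T2 = np.sum(t ** 2 / pca.explained_variance_)     # Hotelling's T2
        resid = x - pca.inverse_transform(t[None])[0]
        Q = resid @ resid                                 # Q-statistic (SPE)
        return t, T2, Q

    # A faulty sample: the descriptors (t, T2, Q) would be passed to
    # case-based retrieval, e.g. nearest stored cases with known diagnoses.
    t, T2, Q = monitoring_stats(X_normal[0] + np.r_[5.0, np.zeros(11)])
    print(T2, Q)
    ```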

  5. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    PubMed Central

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study of 270 working personnel from different administrative groups (contractual, fixed-term and regular), selected from among the university's 872 employees through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. Questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach's alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions, namely environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding) and ability (knowledge and skills), together with total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results also indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff's view, were in a favorable condition and needed further attention from the authorities. PMID:25560364

  7. Quantitative Profiling of Polar Metabolites in Herbal Medicine Injections for Multivariate Statistical Evaluation Based on Independence Principal Component Analysis

    PubMed Central

    Wang, Yuefei; Xu, Lei; Wang, Meng; Zhao, Buchang; Jia, Lifu; Pan, Hao; Zhu, Yan; Gao, Xiumei

    2014-01-01

    Botanical primary metabolites are extensively present in herbal medicine injections (HMIs) but are often ignored in quality control. Because the routinely applied reversed-phase chromatographic fingerprint technology is poorly suited to hydrophilic substances, strongly polar primary metabolites such as saccharides, amino acids and organic acids are usually difficult to detect. In this study, a proton nuclear magnetic resonance (1H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for the subsequent multivariate quality control assay. The quantitative 1H NMR method was validated with good linearity, precision, repeatability, stability and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of χ2 and Hotelling T2 control charts. With the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to detect out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid and protocatechuic aldehyde. The integrated strategy has provided a reliable approach to identify and quantify multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs. PMID:25157567

  8. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    A comparison of different mother wavelets used for de-noising model and experimental data, represented by profiles of absorption spectra of exhaled air, is presented. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.

  9. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    PubMed

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model of borneol based on principal component analysis (PCA) and support vector machine (SVM) was established. Borneol, a common agent in Chinese medicine, comes from different sources and is easily confused in pharmaceutical and trade settings, so a rapid, simple and accurate detection and identification method is needed; quick, efficient and correct identification of borneol is important for assuring product quality, protecting consumers' rights, and supporting its production and trade. Terahertz time-domain spectroscopy is a new spectroscopic approach to characterizing materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. PCA score plots in 2D (PC1 X PC2) and 3D (PC1 X PC2 X PC3) were obtained for the three kinds of borneol samples, and both showed good clustering of the three kinds. The scores of the first 10 principal components (PCs) were used in place of the original spectral data; 60 samples of the three kinds of borneol were used for training, and 60 unknown samples were then identified. Four SVM models with different kernel functions were constructed in this way. The results show that the SVM with the radial basis function (RBF) kernel identified and classified the three kinds of borneol with 100% accuracy, so it was selected to establish the borneol identification model. In addition, in the noisy case, the classification accuracy of all four SVM kernel functions remained above 85%, indicating that SVM has strong generalization ability. This study shows that PCA combined with SVM on borneol terahertz spectra gives good classification and identification results and provides a new method for species identification of borneol.
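
    The classification pipeline reduces to a few scikit-learn calls; the "spectra" below are synthetic stand-ins for the three borneol classes, and the class templates, noise level, and split are invented.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_class, n_points = 40, 180               # spectra sampled 0.2-2 THz
    templates = rng.normal(size=(3, n_points))    # one template per class
    X = np.vstack([templates[c] + 0.3 * rng.normal(size=(n_per_class, n_points))
                   for c in range(3)])
    y = np.repeat([0, 1, 2], n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    # The first 10 principal components replace the raw spectrum, as in the
    # paper, followed by an RBF-kernel SVM.
    model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print("test accuracy:", model.score(X_te, y_te))
    ```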

  10. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate setting. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.

  11. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation with detectors to obtain signals. These signals are analyzed to obtain indications of what is happening within the system or to determine problems that may be present. For multi-phase system analysis, more than one radiotracer is used, and the result is a mixture of radiotracer signals. The problem in such cases is how to separate these signals from each other. The paper presents a method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals. Two different radiotracers (Technetium-99m (Tc-99m) and Barium-137m (Ba-137m)) were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals were mixed, signal processing steps including background correction and signal de-noising were performed, and the separation algorithms were then applied. Three separation algorithms were carried out: a time-domain-based separation algorithm, an Independent Component Analysis (ICA) based separation algorithm, and a Principal Component Analysis (PCA) based separation algorithm. The results proved the superiority of the PCA-based separation algorithm over the other algorithms, and the combination of the PCA-based algorithm with the signal processing steps gives a considerable improvement of the separation process. PMID:26974488

  12. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  13. [Analysis and comparison of intestinal absorption of components of Gegenqinlian decoction in different combinations based on pharmacokinetic parameters].

    PubMed

    Zhang, Yi-Zhu; An, Rui; Yuan, Jin; Wang, Yue; Gu, Qing-Qing; Wang, Xin-Hong

    2013-10-01

    To analyse and compare the intestinal absorption characteristics of puerarin, baicalin, berberine and liquiritin in different combinations of Gegenqinlian decoction based on pharmacokinetic parameters, a sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was applied to quantify the four components in rat plasma. Pharmacokinetic parameters were determined from the plasma concentration-time data with the DAS software package. The influence of different combinations on the pharmacokinetics of the four components was studied, together with results from the in vitro everted gut model and the rat single-pass intestinal perfusion model, to analyse and compare their absorption differences. The results showed that, compared with other combinations, the AUC values of puerarin, baicalin and berberine increased significantly in the Gegenqinlian decoction group, while the AUC value of liquiritin was reduced. Moreover, the absorption of the four components increased significantly, as supported by the results from the in vitro everted gut model and the rat single-pass intestinal perfusion model, indicating that Gegenqinlian decoction may promote the absorption of the four components and accelerate the metabolism of liquiritin by cytochrome P450. PMID:24417090

  14. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.

  15. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  16. A process-based analysis of ocean heat uptake in an AOGCM with an eddy-permitting ocean component

    NASA Astrophysics Data System (ADS)

    Kuhlbrodt, T.; Gregory, J. M.; Shaffrey, L. C.

    2015-12-01

    About 90 % of the anthropogenic increase in heat stored in the climate system is found in the oceans. Therefore it is relevant to understand the details of ocean heat uptake. Here we present a detailed, process-based analysis of ocean heat uptake (OHU) processes in HiGEM1.2, an atmosphere-ocean general circulation model with an eddy-permitting ocean component of 1/3° resolution. Similarly to various other models, HiGEM1.2 shows that the global heat budget is dominated by a downward advection of heat compensated by upward isopycnal diffusion. Only in the upper tropical ocean do we find the classical balance between downward diapycnal diffusion and upward advection of heat. The upward isopycnal diffusion of heat is located mostly in the Southern Ocean, which thus dominates the global heat budget. We compare the responses to a 4xCO2 forcing and an enhancement of the windstress forcing in the Southern Ocean. This highlights the importance of regional processes for the global ocean heat uptake. These are mainly surface fluxes and convection in the high latitudes, and advection in the Southern Ocean mid-latitudes. Changes in diffusion are less important. In line with the CMIP5 models, HiGEM1.2 shows a band of strong OHU in the mid-latitude Southern Ocean in the 4xCO2 run, which is mostly advective. By contrast, in the high-latitude Southern Ocean regions it is the suppression of convection that leads to OHU. In the enhanced windstress run, convection is strengthened at high Southern latitudes, leading to heat loss, while the magnitude of the OHU in the Southern mid-latitudes is very similar to the 4xCO2 results. Remarkably, there is only very small global OHU in the enhanced windstress run. The wind stress forcing just leads to a redistribution of heat. We relate the ocean changes at high Southern latitudes to the effect of climate change on the Antarctic Circumpolar Current (ACC). It weakens in the 4xCO2 run and strengthens in the wind stress run. The weakening is due

  17. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Wada, Ted S.

    In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the acoustic echo cancellation (AEC) problem. Such a “system” perspective aims for the integration of individual components, or algorithms, into a cohesive unit for the benefit of the system as a whole to cope with real-world enhancement problems. The standard system identification approach by minimizing the mean square error (MSE) of a linear system is sensitive to distortions that greatly affect the quality of the identification result. Therefore, we begin by examining in detail the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of the LMS algorithm when there is an interference at the near end, where the source of distortion may be linear or nonlinear. We provide a thorough derivation and analysis of the error recovery nonlinearity (ERN) that “enhances” the filter estimation error prior to the adaptation to transform the corrupted error’s distribution into a desired one, or very close to it, in order to assist the linear adaptation process. We reveal important connections of the residual echo enhancement (REE) technique to other existing AEC and signal enhancement procedures, where the technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. Notably, the single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. Indeed, SBSS optimized via ICA leads to the system combination of the LMS algorithm with the ERN that allows continuous and stable adaptation even during double talk. Next, we extend the system perspective

  18. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  19. Structured Functional Principal Component Analysis

    PubMed Central

    Shou, Haochang; Zipunnikov, Vadim; Crainiceanu, Ciprian M.; Greven, Sonja

    2015-01-01

    Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep. PMID:25327216

  1. Evaluation of the aroma quality of Chinese traditional soy paste during storage based on principal component analysis.

    PubMed

    Peng, Xingyun; Li, Xin; Shi, Xiaodi; Guo, Shuntang

    2014-05-15

    Soy paste, a fermented soybean product, is widely used for flavouring in East and Southeast Asian countries. The characteristic aroma of soy paste is important throughout its shelf life. This study extracted volatile compounds via headspace solid-phase microextraction and quantified 15 key volatile compounds using gas chromatography and gas chromatography-mass spectrometry. Changes in aroma content during storage were analyzed using an accelerated storage model (40 °C, 28 days). Over the 28 days of storage, the alcohol and aldehyde contents among key soy paste volatile compounds decreased by 35% and 26%, respectively. By contrast, acid, ester, and heterocycle contents increased by 130%, 242%, and 15%, respectively. The overall odour type transformed from a floral to a roasting aroma. According to sample clustering in the principal component analysis, the storage life of soy paste could be divided into three periods, representing the floral, roasting, and pungent aroma types. PMID:24423567

  2. Application of independent component analysis to ac dipole based optics measurement and correction at the Relativistic Heavy Ion Collider

    NASA Astrophysics Data System (ADS)

    Shen, X.; Lee, S. Y.; Bai, M.; White, S.; Robert-Demolaize, G.; Luo, Y.; Marusic, A.; Tomás, R.

    2013-11-01

    Correction of beta-beat is of great importance for improving the performance of high-energy accelerators such as the Relativistic Heavy Ion Collider (RHIC). At RHIC, linear optical functions are extracted from turn-by-turn beam position data of the ac dipole driven betatron oscillation using the independent component analysis method. Despite the constraint of a limited number of available quadrupole correctors at RHIC, a global beta-beat correction scheme using a beta-beat response matrix method was developed and experimentally demonstrated. In both rings, a factor of 2 or better reduction of beta-beat was achieved within the available beam time. At the same time, a new scheme using horizontal closed-orbit bumps at sextupoles to correct beta-beat in the arcs was demonstrated in the Yellow ring of RHIC at a beam energy of 255 GeV, and a peak beta-beat of approximately 7% was achieved.

  3. EP component identification and measurement by principal components analysis.

    PubMed

    Chapman, R M; McCrary, J W

    1995-04-01

    Between the acquisition of Evoked Potential (EP) data and their interpretation lies a major problem: What to measure? An approach to this kind of problem is outlined here in terms of Principal Components Analysis (PCA). An important second theme is that experimental manipulation is important to functional interpretation. It would be desirable to have a system of EP measurement with the following characteristics: (1) represent the data in a concise, parsimonious way; (2) determine EP components from the data without assuming in advance any particular waveforms for the components; (3) extract components which are independent of each other; (4) measure the amounts (contributions) of various components in observed EPs; (5) use measures that have greater reliability than measures at any single time point or peak; and (6) identify and measure components that overlap in time. PCA has these desirable characteristics. Simulations are illustrated. PCA's beauty also has some warts that are discussed. In addition to discussing the usual two-mode model of PCA, an extension of PCA to a three-mode model is described that provides separate parameters for (1) waveforms over time, (2) coefficients for spatial distribution, and (3) scores telling the amount of each component in each EP. PCA is compared with more traditional approaches. Some biophysical considerations are briefly discussed. Choices to be made in applying PCA are considered. Other issues include misallocation of variance, overlapping components, validation, and latency changes. PMID:7626278

  4. Global Observations of SO2 and HCHO Using an Innovative Algorithm based on Principal Component Analysis of Satellite Radiance Data

    NASA Astrophysics Data System (ADS)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay; Fioletov, Vitali; McLinden, Chris

    2015-04-01

    We report on the latest progress in the development and application of a new trace gas retrieval algorithm for spaceborne UV-VIS spectrometers. Developed at NASA Goddard Space Flight Center, this algorithm utilizes the principal component analysis (PCA) technique to extract a series of spectral features (principal components or PCs) explaining the variance of measured reflectance spectra. For a species of interest that has no or very small background signals, such as SO2 or HCHO, the leading PCs (those explaining the most variance) obtained over clean areas are generally associated with various physical processes (e.g., ozone absorption, rotational Raman scattering) and measurement details (e.g., wavelength shift) other than the signals of interest. By fitting these PCs and pre-computed Jacobians for the target species to a measured radiance spectrum, we can then estimate its atmospheric loading. The PCA algorithm has been operationally implemented to produce the new generation NASA Aura/OMI standard planetary boundary layer (PBL) SO2 product. Comparison with the previous OMI PBL SO2 product indicates that the PCA algorithm reduces the retrieval noise by a factor of two and greatly improves the data quality, allowing detection of smaller point SO2 pollution sources that have not been previously measured from space. We have also demonstrated the algorithm for SO2 retrievals using the new NASA/NOAA S-NPP/OMPS UV spectrometer. For HCHO, the new algorithm shows great promise as evidenced by results obtained from both OMI and OMPS. Finally, we discuss the most recent progress in the algorithm development, including the implementation of a new Jacobian lookup table to more appropriately account for the sensitivity of satellite sensors to various measurement conditions (e.g., viewing geometry, surface reflectance and cloudiness).
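
    The fitting step can be sketched as one least-squares solve: leading PCs from (presumed) clean scenes absorb the background variance, and the coefficient of a precomputed target Jacobian estimates the loading. All spectra below are synthetic placeholders, not OMI/OMPS data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_wl, n_clean = 120, 2000
    # Correlated "clean scene" spectra stand in for SO2-free observations.
    clean = rng.normal(size=(n_clean, n_wl)) @ rng.normal(size=(n_wl, n_wl)) * 0.01
    mu = clean.mean(axis=0)
    _, _, Vt = np.linalg.svd(clean - mu, full_matrices=False)
    pcs = Vt[:8]                                   # leading principal components

    jac_so2 = np.sin(np.linspace(0, 12, n_wl)) * 0.05   # stand-in SO2 Jacobian

    # A measured spectrum: background variance plus an SO2 signal of 0.7.
    y = mu + 0.3 * pcs[0] - 0.2 * pcs[3] + 0.7 * jac_so2

    # Fit PCs and Jacobian simultaneously; the last coefficient estimates
    # the SO2 loading.
    G = np.vstack([pcs, jac_so2]).T                # (n_wl, 9) design matrix
    coef, *_ = np.linalg.lstsq(G, y - mu, rcond=None)
    print("estimated SO2 loading:", coef[-1])      # ~0.7
    ```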

  5. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are used for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.

  6. Associated neural network independent component analysis structure

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Kostrzweski, Andrew

    2006-05-01

    Detection, classification, and localization of potential security breaches in extremely high-noise environments are important for perimeter protection and threat detection, both for homeland security and for military force protection. Physical Optics Corporation has developed a threat detection system to separate acoustic signatures from unknown, mixed sources embedded in extremely high-noise environments where signal-to-noise ratios (SNRs) are very low. Associated neural network structures based on independent component analysis are designed to detect/separate new acoustic sources and to provide reliability information. The structures are tested through computer simulations for each critical component, including a spontaneous detection algorithm for potential threat detection without a predefined knowledge base, a fast target separation algorithm, and a nonparametric methodology for a quantified confidence measure. The results show that the method discussed can separate hidden acoustic sources in noisy environments at 5 dB SNR with an accuracy of 80%.

  7. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  8. Adaptive detection method of infrared small target based on target-background separation via robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chuanyun; Qin, Shiyin

    2015-03-01

    Motivated by robust principal component analysis, the infrared small target image is regarded as a low-rank background matrix corrupted by sparse target and noise matrices; thus a new target-background separation model is designed, and an adaptive detection method for infrared small targets is presented. Firstly, multi-scale transform and patch transform are used to generate an image patch set for infrared small target detection; secondly, target-background separation of each patch is achieved by recovering the low-rank and sparse matrices using an adaptive weighting parameter; thirdly, image reconstruction and fusion are carried out to obtain the entire separated background and target images; finally, infrared small target detection is realized by threshold segmentation of a template-matching similarity measurement. In order to validate the performance of the proposed method, three experiments, target-background separation, background clutter suppression and infrared small target detection, are performed over different clutter backgrounds with real infrared small targets in single-frame or sequence images. A series of experimental results demonstrate that the proposed method can not only suppress background clutter effectively, even under strong noise interference, but also detect targets accurately with a low false alarm rate.
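
    A compact principal component pursuit sketch (inexact-ALM flavour) shows the low-rank background plus sparse target split; the parameters and iteration count are illustrative choices, not the paper's adaptive weighting scheme.

    ```python
    import numpy as np

    def rpca(D, n_iter=200):
        """Split D into low-rank L and sparse S via singular-value
        thresholding and elementwise soft thresholding."""
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = 0.25 * m * n / np.abs(D).sum()
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
        L = np.zeros_like(D); S = np.zeros_like(D)
        shrink = lambda X, tau: np.sign(X) * np.maximum(np.abs(X) - tau, 0)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = U @ np.diag(shrink(sig, 1 / mu)) @ Vt      # low-rank update
            S = shrink(D - L + Y / mu, lam / mu)           # sparse update
            Y += mu * (D - L - S)                          # dual update
        return L, S

    rng = np.random.default_rng(0)
    bg = np.outer(rng.random(60), rng.random(80))      # rank-1 background
    tgt = np.zeros((60, 80)); tgt[30, 40] = 1.0        # small bright target
    L, S = rpca(bg + tgt)
    print("target peak in sparse part:", S.max())
    ```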

  9. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    PubMed Central

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  11. The double K+/Ca2+ sensor based on laser scanned silicon transducer (LSST) for multi-component analysis.

    PubMed

    Ermolenko, Yu; Yoshinobu, T; Mourzina, Yu; Furuichi, K; Levichev, S; Schöning, M J; Vlasov, Yu; Iwasaki, H

    2003-03-10

    In the present work a double ion sensor based on a laser scanned semiconductor transducer (LSST) for the simultaneous determination of K⁺ and Ca²⁺ ions in solutions has been developed. Specially elaborated ion-sensitive membrane compositions based on valinomycin and the calcium ionophore calcium bis[4-(1,1,3,3-tetramethylbutyl)phenyl] phosphate (t-HDOPP-Ca) were deposited as separate layers on a silanized surface of the Si/SiO₂/Si₃N₄ transducer. The proposed multi-sensor exhibits theoretical sensitivities, and the detection limits of the sensor were found to be 2 × 10⁻⁶ mol l⁻¹ for K⁺ and 5 × 10⁻⁶ mol l⁻¹ for Ca²⁺. The elaborated double sensor is proposed for the first time as a prototype of a new type of multi-sensor system for chemical analysis. PMID:18968966

  12. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To solve the multicollinearity issue and the unequal contribution of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemic model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, which indicates they are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.

  13. Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
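
    The flavour of sequential, gradient-descent PCA is easy to sketch with Oja-style updates and deflation; this illustrates sequential PCA generally, not the specific DOGEDYN dominant-element update or its dynamic learning-rate schedule.

    ```python
    import numpy as np

    def sequential_pca(X, n_components, lr=0.002, epochs=20, seed=0):
        """Extract principal vectors one at a time by stochastic gradient
        ascent (Oja's rule), deflating the data after each extraction."""
        rng = np.random.default_rng(seed)
        X = X - X.mean(axis=0)
        comps = []
        for _ in range(n_components):
            w = rng.normal(size=X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in X:                      # one sample vector at a time
                    y = w @ x
                    w += lr * y * (x - y * w)    # Oja's update
                w /= np.linalg.norm(w)
            comps.append(w)
            X = X - np.outer(X @ w, w)           # deflate extracted component
        return np.array(comps)

    rng = np.random.default_rng(1)
    data = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
    print(sequential_pca(data, 2))
    ```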

  14. Skill Components of Task Analysis

    ERIC Educational Resources Information Center

    Adams, Anne E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool of many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task…

  15. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589

  16. Fault tree analysis of nuclear power plant components and systems. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    SciTech Connect

    Not Available

    1992-09-01

    The bibliography contains citations concerning risk assessment, reliability analysis, failure analysis, and safety studies of nuclear power plant components and systems using fault tree analysis methods. Faults caused by components, human error, environmental considerations, and common mode failures are presented. Various systems and components are analyzed, including high pressure safety injection, auxiliary feedwater, instrumentation, emergency core flooding and cooling, and steam generator tubing. (Contains a minimum of 59 citations and includes a subject term index and title list.)

  17. Multivariate streamflow forecasting using independent component analysis

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Sharma, Ashish; Brown, Casey; Lall, Upmanu

    2008-02-01

    Seasonal forecasting of streamflow provides many benefits to society, by improving our ability to plan and adapt to changing water supplies. A common approach to developing these forecasts is to use statistical methods that link a set of predictors representing climate state as it relates to historical streamflow, and then using this model to project streamflow one or more seasons in advance based on current or a projected climate state. We present an approach for forecasting multivariate time series using independent component analysis (ICA) to transform the multivariate data to a set of univariate time series that are mutually independent, thereby allowing for the much broader class of univariate models to provide seasonal forecasts for each transformed series. Uncertainty is incorporated by bootstrapping the error component of each univariate model so that the probability distribution of the errors is maintained. Although all analyses are performed on univariate time series, the spatial dependence of the streamflow is captured by applying the inverse ICA transform to the predicted univariate series. We demonstrate the technique on a multivariate streamflow data set in Colombia, South America, by comparing the results to a range of other commonly used forecasting methods. The results show that the ICA-based technique is significantly better at representing spatial dependence, while not resulting in any loss of ability in capturing temporal dependence. As such, the ICA-based technique would be expected to yield considerable advantages when used in a probabilistic setting to manage large reservoir systems with multiple inflows or data collection points.
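
    The forecasting recipe can be sketched end to end: rotate the flows into independent series with FastICA, forecast each with a simple univariate model (here a least-squares AR(2), standing in for the broader univariate model class), and rotate back. The data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    T, d = 400, 3
    latent = np.cumsum(rng.normal(size=(T, d)), axis=0) * 0.1   # latent signals
    flows = latent @ rng.normal(size=(d, d))                    # mixed streamflows

    ica = FastICA(n_components=d, random_state=0)
    U = ica.fit_transform(flows)                 # mutually independent series

    def ar_forecast(u, p=2):
        # Least-squares AR(p) one-step-ahead forecast for a single series.
        Y = u[p:]
        X = np.column_stack([u[p - k - 1 : len(u) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return u[-p:][::-1] @ coef

    u_next = np.array([ar_forecast(U[:, j]) for j in range(d)])
    flow_next = ica.inverse_transform(u_next[None])[0]   # back to flow space
    print(flow_next)
    ```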

  18. Finite Element Based Stress Analysis of Graphite Component in High Temperature Gas Cooled Reactor Core Using Linear and Nonlinear Irradiation Creep Models

    SciTech Connect

    Mohanty, Subhasish; Majumdar, Saurindranath

    2015-01-01

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  19. Face Recognition by Independent Component Analysis

    PubMed Central

    Bartlett, Marian Stewart; Movellan, Javier R.; Sejnowski, Terrence J.

    2010-01-01

    A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. PMID:18244540
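
    The two architectures can be sketched as below; the matrix orientations are the essential point. FastICA and the synthetic data here stand in for the information-maximization ICA and FERET images used in the paper:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        faces = rng.normal(size=(60, 1024))     # 60 vectorized face images

        # Architecture I: images are random variables, pixels are outcomes;
        # the independent sources are spatially local basis images.
        ica1 = FastICA(n_components=20, random_state=0)
        basis_images = ica1.fit_transform(faces.T).T   # 20 basis images

        # Architecture II: pixels are random variables, images are outcomes;
        # each face receives a factorial (statistically independent) code.
        ica2 = FastICA(n_components=20, random_state=0)
        factorial_code = ica2.fit_transform(faces)     # 60 codes of length 20

        print(basis_images.shape, factorial_code.shape)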

  20. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  1. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The ideas of cost decomposition are summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  2. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study

    PubMed Central

    Kreuzer, Stefan W.; Pourmoghaddam, Amir; Leffers, Kevin J.; Johnson, Clint W.; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force sensing based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on transepicondylar axis, anterior-posterior axis, or the utilization of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and other methods (P < 0.05). Soft tissue balancing resulted in smaller deviation from anatomical epicondylar axis than any other method. 77% of operated knees were within a range of ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation. PMID:26881086

  3. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.

  4. Fatigue analysis codes for WECS components

    SciTech Connect

    Sutherland, H.J.; Ashwill, T.D.; Naassan, K.A.

    1987-10-01

    This manuscript discusses two numerical techniques, the LIFE and LIFE2 codes, that analyze the fatigue life of WECS components. The LIFE code is a PC-compatible Basic code that analyzes the fatigue life of a VAWT component. The LIFE2 code is a PC-compatible Fortran code that relaxes the rather restrictive assumptions of the LIFE code and permits the analysis of the fatigue life of all WECS components. Also, the modular format of the LIFE2 code permits the code to be revised, with minimal effort, to include additional analyses while maintaining its integrity. To illustrate the use of the codes, an example problem is presented. 10 refs.

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
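
    The cross-validation idea can be illustrated with held-out likelihoods from probabilistic PCA (scikit-learn's PCA.score); the paper's mixed ICA/PCA likelihood would replace the PCA score, so this is only a structural analogue on toy data:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(150, 4)) @ rng.normal(size=(4, 10))  # rank-4 signal
        X += 0.1 * rng.normal(size=X.shape)                       # noise floor

        # Score candidate subspace sizes by held-out log-likelihood and keep
        # the dimension where the cross-validated likelihood peaks.
        for k in range(1, 8):
            ll = cross_val_score(PCA(n_components=k), X, cv=5).mean()
            print(k, round(ll, 2))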

  6. Component fragilities. Data collection, analysis and interpretation

    SciTech Connect

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high-level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high-level test data exists.

  7. Principal component analysis for the forensic discrimination of black inkjet inks based on the Vis-NIR fibre optics reflection spectra.

    PubMed

    Gál, Lukáš; Oravec, Michal; Gemeiner, Pavol; Čeppan, Michal

    2015-12-01

    Nineteen black inkjet inks of six different brands were examined by fibre optics reflection spectroscopy in the visible and near-infrared region (Vis-NIR FORS) directly on paper, with a view to achieving good resolution between them. These inks were printed on nineteen different inkjet printers from three brands, and samples were measured on the prints with a reflection probe. Processed reflection spectra in the range 500-1000 nm were used as samples in principal component analysis. Variability between spectra of the same ink obtained from different prints, as well as between spectra of square areas and lines, was examined. Reference Principal Component Analysis (PCA) models were created for spectra obtained from both square areas and lines. According to these models, the inkjet inks were divided into clusters. The PCA method is able to separate inks containing carbon black as the main colorant from inks using other colorants. Some spectra were recorded from prints made on a different printer and used as validation samples. Spectra of the validation samples were projected onto the reference PCA models. According to the position of the validation samples in the score plots, it can be concluded that PCA based on Vis-NIR FORS can reliably differentiate inkjet inks which are included in the reference database. The presented method appears to be a suitable tool for forensic examination of questioned documents containing inkjet inks. Inkjet ink spectra were obtained without extraction or cutting of the sample, with the possibility of measuring outside the laboratory. PMID:26448533
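
    A minimal sketch of the reference-model-plus-projection workflow, assuming preprocessed 500-1000 nm spectra in rows; the array names and the nearest-neighbour assignment are illustrative:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        ref_spectra = rng.normal(size=(19, 251))   # stand-in reference inks
        val_spectra = rng.normal(size=(4, 251))    # spectra from new prints

        pca = PCA(n_components=3)
        ref_scores = pca.fit_transform(ref_spectra)  # reference score plot
        val_scores = pca.transform(val_spectra)      # project validation data

        # Assign each validation spectrum to the nearest reference sample;
        # in practice the clusters correspond to the known ink brands.
        d = np.linalg.norm(val_scores[:, None, :] - ref_scores[None, :, :],
                           axis=2)
        print(d.argmin(axis=1))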

  8. [Analysis of major components in water based stamp pad inks and their imprints by ultra high performance liquid chromatography-mass spectrometry and gas chromatography-mass spectrometry].

    PubMed

    Zhang, Qing; Zou, Jixin; Shi, Gaojun; Zhang, Lijuan

    2010-12-01

    Ultra high performance liquid chromatography-mass spectrometry (UHPLC-MS) technology and gas chromatography-mass spectrometry (GC-MS) technology were used to qualitatively analyze the major components in water based stamp pad inks including major colorants and volatile components. After the samples were supersonically extracted and then centrifuged, UHPLC-MS was used to separate and identify the major colorants. A ZORBAX Eclipse Plus Phenyl-Hexyl (50 mm x 4.6 mm, 1.8 microm) column and 15 mmol/L ammonium acetate-acetonitrile were utilized for the separation and negative selected ion monitoring mode (SIM) was set for the MS analysis. An HP-INNOWAX (30 m x 0.25 mm, 0.25 microm) column was employed in the GC-MS analysis with the full-scan mode to determine the volatiles. This study demonstrated that the major colorants in the inks and their imprints were Acid Red R, Eosin Y and Pigment Red 112; and the major volatiles were glycerol, 1,2-propanediol, etc. The method is rapid and accurate. It also demonstrates that the method can meet the requirements for imprint determination in material evidence identification. The work provides a reliable tool for the categorization research in the forensic sciences. PMID:21438364

  9. PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Wang, Weichen

    2016-01-01

    This paper introduces a Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with the conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by the additive model via sieve approximations. By using the newly proposed Projected-PCA, the rates of convergence of the smooth factor loading matrices are obtained, which are much faster than those of the conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests on whether observed covariates have explanatory power on the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374

  10. Energy component analysis of π interactions.

    PubMed

    Sherrill, C David

    2013-04-16

    Fundamental features of biomolecules, such as their structure, solvation, and crystal packing and even the docking of drugs, rely on noncovalent interactions. Theory can help elucidate the nature of these interactions, and energy component analysis reveals the contributions from the various intermolecular forces: electrostatics, London dispersion terms, induction (polarization), and short-range exchange-repulsion. Symmetry-adapted perturbation theory (SAPT) provides one method for this type of analysis. In this Account, we show several examples of how SAPT provides insight into the nature of noncovalent π-interactions. In cation-π interactions, the cation strongly polarizes electrons in π-orbitals, leading to substantially attractive induction terms. This polarization is so important that a cation and a benzene attract each other when placed in the same plane, even though a consideration of the electrostatic interactions alone would suggest otherwise. SAPT analysis can also support an understanding of substituent effects in π-π interactions. Trends in face-to-face sandwich benzene dimers cannot be understood solely in terms of electrostatic effects, especially for multiply substituted dimers, but SAPT analysis demonstrates the importance of London dispersion forces. Moreover, detailed SAPT studies also reveal the critical importance of charge penetration effects in π-stacking interactions. These effects arise in cases with substantial orbital overlap, such as in π-stacking in DNA or in crystal structures of π-conjugated materials. These charge penetration effects lead to attractive electrostatic terms where a simpler analysis based on atom-centered charges, electrostatic potential plots, or even distributed multipole analysis would incorrectly predict repulsive electrostatics. SAPT analysis of sandwich benzene, benzene-pyridine, and pyridine dimers indicates that dipole/induced-dipole terms present in benzene-pyridine but not in benzene dimer are relatively

  11. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  12. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
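
    The pixel-wise unmixing view can be sketched as follows; scikit-learn's FastICA is an assumed stand-in for the paper's abundance-based learning algorithm, and the cube is synthetic:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(4)
        cube = rng.random(size=(64, 64, 50))       # stand-in hyperspectral cube
        pixels = cube.reshape(-1, cube.shape[-1])  # (n_pixels x n_bands)

        # Treat per-pixel abundance fractions as the independent sources.
        ica = FastICA(n_components=4, random_state=0)
        abundances = ica.fit_transform(pixels)
        abundance_maps = abundances.reshape(64, 64, -1)  # one map per component
        print(abundance_maps.shape)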

  13. Principal Components Analysis of Population Admixture

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    With the availability of high-density genotype information, principal components analysis (PCA) is now routinely used to detect and quantify the genetic structure of populations in both population genetics and genetic epidemiology. An important issue is how to make appropriate and correct inferences about population relationships from the results of PCA, especially when admixed individuals are included in the analysis. We extend our recently developed theoretical formulation of PCA to allow for admixed populations. Because the sampled individuals are treated as features, our generalized formulation of PCA directly relates the pattern of the scatter plot of the top eigenvectors to the admixture proportions and parameters reflecting the population relationships, and thus can provide valuable guidance on how to properly interpret the results of PCA in practice. Using our formulation, we theoretically justify the diagnostic of two-way admixture. More importantly, our theoretical investigations based on the proposed formulation yield a diagnostic of multi-way admixture. For instance, we found that admixed individuals with three parental populations are distributed inside the triangle formed by their parental populations and divide the triangle into three smaller triangles whose areas have the same proportions in the big triangle as the corresponding admixture proportions. We tested and illustrated these findings using simulated data and data from HapMap III and the Human Genome Diversity Project. PMID:22808102
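
    The triangle diagnostic admits a short worked example: for a three-way admixed individual, the areas of the sub-triangles formed with the parental vertices recover the admixture proportions. The coordinates below are invented; in practice they would be population positions in the top-two-PC plane:

        import numpy as np

        def tri_area(a, b, c):
            # Unsigned area of triangle abc from the cross-product determinant.
            return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                             - (b[1] - a[1]) * (c[0] - a[0]))

        # Parental population positions in the PC1-PC2 plane (hypothetical).
        p1, p2, p3 = (np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                      np.array([0.2, 1.0]))
        x = 0.5 * p1 + 0.3 * p2 + 0.2 * p3    # individual with known mixture

        total = tri_area(p1, p2, p3)
        props = np.array([tri_area(x, p2, p3),   # sub-triangle opposite p1
                          tri_area(x, p1, p3),   # opposite p2
                          tri_area(x, p1, p2)]) / total
        print(props)                             # ~ [0.5, 0.3, 0.2]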

  14. Application of new methodologies based on design of experiments, independent component analysis and design space for robust optimization in liquid chromatography.

    PubMed

    Debrus, Benjamin; Lebrun, Pierre; Ceccato, Attilio; Caliaro, Gabriel; Rozet, Eric; Nistor, Iolanda; Oprean, Radu; Rupérez, Francisco J; Barbas, Coral; Boulanger, Bruno; Hubert, Philippe

    2011-04-01

    HPLC separations of an unknown sample mixture and a pharmaceutical formulation have been optimized using a recently developed chemometric methodology proposed by W. Dewé et al. in 2004 and improved by P. Lebrun et al. in 2008. This methodology is based on experimental designs which are used to model retention times of compounds of interest. Then, the prediction accuracy and the optimal separation robustness, including the uncertainty study, were evaluated. Finally, the design space (ICH Q8(R1) guideline) was computed as the probability for a criterion to lie in a selected range of acceptance. Furthermore, the chromatograms were automatically read. Peak detection and peak matching were carried out with a previously developed methodology using independent component analysis published by B. Debrus et al. in 2009. The present successful applications strengthen the high potential of these methodologies for the automated development of chromatographic methods. PMID:21458628

  15. Wavefront aberration measurement method for a hyper-NA lithographic projection lens based on principal component analysis of an aerial image.

    PubMed

    Zhu, Boer; Wang, Xiangzhao; Li, Sikun; Yan, Guanyong; Shen, Lina; Duan, Lifeng

    2016-04-20

    A wavefront aberration measurement method for a hyper-NA lithographic projection lens, based on principal component analysis of an aerial image, is proposed. Aerial images of the hyper-NA lithographic projection lens are expressed accurately by using polarized light and a vector imaging model, as well as by considering the polarization properties. As a result, the wavefront aberrations of the hyper-NA lithographic projection lens are measured accurately. The lithographic simulator PROLITH is used to validate the accuracy of the wavefront aberration measurement and to analyze the impacts on that accuracy of the polarization rotation of the illumination, the degree of polarization of the light, and the sampling interval of the aerial images. The results show that the proposed method can retrieve 33 terms of Zernike coefficients (Z5-Z37) with a maximum error of less than 0.00085λ. PMID:27140087
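
    A structural sketch of the retrieval step, assuming a library of simulated aerial images with known Zernike coefficients; a toy linear image model stands in for the vector imaging model and PROLITH simulations:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        zernikes = rng.normal(scale=0.01, size=(500, 33))  # Z5-Z37, in waves
        images = zernikes @ rng.normal(size=(33, 1024))    # toy image model
        images += rng.normal(scale=1e-4, size=images.shape)

        # PCA compresses the aerial images; a linear map from the principal
        # component scores back to the aberration coefficients is then fitted.
        pca = PCA(n_components=33)
        scores = pca.fit_transform(images)
        reg = LinearRegression().fit(scores, zernikes)

        z_est = reg.predict(pca.transform(images[:1]))     # retrieved terms
        print(np.abs(z_est - zernikes[:1]).max())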

  16. Identification of Tea Storage Times by Linear Discrimination Analysis and Back-Propagation Neural Network Techniques Based on the Eigenvalues of Principal Components Analysis of E-Nose Sensor Signals

    PubMed Central

    Yu, Huichun; Wang, Yongwei; Wang, Jun

    2009-01-01

    An electronic nose (E-nose) was employed to detect the aroma of green tea after different storage times. Longjing green tea dry leaves, beverages and residues were detected with an E-nose, respectively. In order to decrease the data dimensionality and optimize the feature vector, the E-nose sensor response data were analyzed by principal components analysis (PCA) and the five main principal components values were extracted as the input for the discrimination analysis. The storage time (0, 60, 120, 180 and 240 days) was better discriminated by linear discrimination analysis (LDA) and was predicted by the back-propagation neural network (BPNN) method. The results showed that the discrimination and testing results based on the tea leaves were better than those based on tea beverages and tea residues. The mean errors of the tea leaf data were 9, 2.73, 3.93, 6.33 and 6.8 days, respectively. PMID:22408494
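
    The feature pipeline reduces the sensor responses to five principal-component scores before classification. A hedged scikit-learn sketch on synthetic responses (the LDA step is shown; an MLP classifier could stand in for the BPNN):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 10))      # stand-in E-nose sensor features
        y = rng.integers(0, 5, size=100)    # five storage-time classes

        # Five principal components feed the linear discriminant step.
        clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
        clf.fit(X, y)
        print(clf.predict(X[:3]))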

  17. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, enabling dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, but this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. We then use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  18. The Analysis of Multitrait-Multimethod Matrices via Constrained Components Analysis.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1996-01-01

    An approach to the analysis of multitrait-multimethod matrices is proposed in which improper solutions are ruled out and convergence is guaranteed. The approach, based on constrained variants of components analysis, provides component scores that can relate components to external variables. It is illustrated through simulated and empirical data.…

  19. Identification of More Feasible MicroRNA–mRNA Interactions within Multiple Cancers Using Principal Component Analysis Based Unsupervised Feature Extraction

    PubMed Central

    Taguchi, Y-h.

    2016-01-01

    MicroRNA(miRNA)–mRNA interactions are important for understanding many biological processes, including development, differentiation and disease progression, but their identification is highly context-dependent. When computationally derived from sequence information alone, the identification should be verified by integrated analyses of mRNA and miRNA expression. The drawback of this strategy is the vast number of identified interactions, which prevents an experimental or detailed investigation of each pair. In this paper, we overcome this difficulty by the recently proposed principal component analysis (PCA)-based unsupervised feature extraction (FE), which reduces the number of identified miRNA–mRNA interactions that properly discriminate between patients and healthy controls without losing biological feasibility. The approach is applied to six cancers: hepatocellular carcinoma, non-small cell lung cancer, esophageal squamous cell carcinoma, prostate cancer, colorectal/colon cancer and breast cancer. In PCA-based unsupervised FE, the significance does not depend on the number of samples (as in the standard case) but on the number of features, which approximates the number of miRNAs/mRNAs. To our knowledge, we have newly identified miRNA–mRNA interactions in multiple cancers based on a single common (universal) criterion. Moreover, the number of identified interactions was sufficiently small to be sequentially curated by literature searches. PMID:27171078

  1. Principal component analysis implementation in Java

    NASA Astrophysics Data System (ADS)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented using the Java programming language. We consider using the PCA algorithm especially for analyzing data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform that is supported by Java.

  2. Principal component analysis of phenolic acid spectra

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  3. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  4. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest, healthy breast tissue, and damaged tissue as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating the t-student statistics for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics in comparison with the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004

  5. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  7. Robust Correlated and Individual Component Analysis.

    PubMed

    Panagakis, Yannis; Nicolaou, Mihalis A; Zafeiriou, Stefanos; Pantic, Maja

    2016-08-01

    Recovering correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with application to problems such as multi-modal fusion (via correlated components), predictive analysis, and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely i) the presence of gross non-Gaussian noise and ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA in order to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to four applications, namely i) heterogeneous face recognition, ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), iii) face clustering, and iv) the temporal alignment of facial expressions. Experimental results on 2 synthetic and 7 real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field. PMID:26552077

  8. Independent component analysis of parameterized ECG signals.

    PubMed

    Tanskanen, Jarno M A; Viik, Jari J; Hyttinen, Jari A K

    2006-01-01

    Independent component analysis (ICA) of measured signals yields the independent sources, given certain fulfilled requirements. Properly parameterized signals provide a better view of the considered system aspects while reducing the amount of data. It is little acknowledged that appropriately parameterized signals may be subjected to ICA, yielding independent components (ICs) that display more clearly the investigated properties of the sources. In this paper, we propose ICA of parameterized signals, and demonstrate the concept with ICA of ST and R parameterizations of electrocardiogram (ECG) signals from ECG exercise test measurements from two coronary artery disease (CAD) patients. PMID:17945912

  9. ARTICLES: Laser spectrochromatographic analysis of petroleum components

    NASA Astrophysics Data System (ADS)

    Korobeĭnik, G. S.; Letokhov, V. S.; Montanari, S. G.; Tumanova, L. M.

    1985-01-01

    A system combining a gas chromatograph and a laser optoacoustic spectrometer (with a CO2 laser and means for fast frequency scanning) was used to investigate model hydrocarbon mixtures, as well as some real objects in the form of benzine fractions of petroleum oil. The fast scanning regime was used to record optoacoustic spectra of hydrocarbons (in the range 9.2-10.8 μm) during the travel time (1-10 s) of the individual components of a mixture through an optoacoustic cell in the course of chromatographic separation of these components. The spectra were used to carry out a group hydrocarbon analysis of benzine fractions of petroleum oil from various locations. The proposed method was relatively fast and was characterized by a good ability to identify the various components, compared with commonly employed methods such as gas-liquid capillary chromatography.

  10. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  12. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electrocardiographic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. A proposed adaptive rectification pre-processing is therefore used to linearize the preamplifier of the ECG, and linear ICA is then applied in an iterative manner until the outputs reach stable kurtosis values. We call this new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of a heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  13. Principal components analysis of Jupiter VIMS spectra

    USGS Publications Warehouse

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) instrument took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work done on the Great Red Spot using NIMS data. Some other new findings, presently under investigation, are presented. © 2004 Published by Elsevier Ltd on behalf of COSPAR.
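
    The mapping step can be sketched as PCA compression of the spectral cube followed by clustering of the per-pixel scores into seven classes; the cube is synthetic and k-means is an assumed stand-in for the paper's classification:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        cube = rng.random(size=(128, 128, 96))   # stand-in VIMS visual cube
        pixels = cube.reshape(-1, 96)

        scores = PCA(n_components=5).fit_transform(pixels)
        classes = KMeans(n_clusters=7, n_init=10,
                         random_state=0).fit_predict(scores)
        class_map = classes.reshape(128, 128)    # seven-class spectral map
        print(np.bincount(classes))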

  14. Failure analysis of aluminum alloy components

    NASA Technical Reports Server (NTRS)

    Johari, O.; Corvin, I.; Staschke, J.

    1973-01-01

    Analysis of six service failures in aluminum alloy components which failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in the clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then indirectly inferred from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparison of the received specimens with laboratory-generated failures.

  15. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public-domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
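
    The parameter estimation step can be illustrated with a maximum-likelihood fit of the three Weibull parameters to a failure population; the data are synthetic and scipy's generic fitter stands in for the public-domain algorithm mentioned above:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        # Synthetic matrix-cracking strengths (MPa) from a 3-parameter Weibull.
        strengths = stats.weibull_min.rvs(c=8.0, loc=50.0, scale=200.0,
                                          size=60, random_state=rng)

        shape, loc, scale = stats.weibull_min.fit(strengths)
        print(f"modulus m = {shape:.2f}, threshold = {loc:.1f} MPa, "
              f"characteristic strength = {scale:.1f} MPa")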

  16. WE-G-18C-09: Separating Perfusion and Diffusion Components From Diffusion Weighted MRI of Rectum Tumors Based On Intravoxel Incoherent Motion (IVIM) Analysis

    SciTech Connect

    Tyagi, N; Wengler, K; Mazaheri, Y; Hunt, M; Deasy, J; Gollub, M

    2014-06-15

    Purpose: Pseudodiffusion arises from the microcirculation of blood in the randomly oriented capillary network and contributes to the signal decay acquired using a multi-b-value diffusion weighted (DW)-MRI sequence. This effect is more significant at low b-values and should be properly accounted for in apparent diffusion coefficient (ADC) calculations. The purpose of this study was to separate the perfusion and diffusion components based on a biexponential and a segmented monoexponential model using IVIM analysis. Methods: The signal attenuation is modeled as S(b) = S0[(1−f)exp(−bD) + f exp(−bD*)]. Fitting the biexponential decay leads to the quantification of D, the true diffusion coefficient, D*, the pseudodiffusion coefficient, and f, the perfusion fraction. A nonlinear least squares fit and two segmented monoexponential models were used to derive the values of D, D*, and f. In the segmented approach, b = 200 s/mm² was used as the cut-off value for the calculation of D. DW-MRIs of a rectum cancer patient, acquired before chemotherapy, before radiation therapy (RT), and 4 weeks into RT, were investigated as an example case. Results: The mean ADC for the tumor drawn on the DWI cases was 0.93, 1.0 and 1.13 ×10⁻³ mm²/s before chemotherapy, before RT and 4 weeks into RT. The mean (D ×10⁻³ mm²/s, D* ×10⁻³ mm²/s, f %) based on the biexponential fit was (0.67, 18.6, 27.2%), (0.72, 17.7, 28.9%) and (0.83, 15.1, 30.7%) at these time points. The mean (D, D*, f) based on the segmented fit was (0.72, 10.5, 12.1%), (0.72, 8.2, 17.4%) and (0.82, 8.1, 16.5%). Conclusion: ADC values are typically higher than true diffusion coefficients. For tumors with a significant perfusion effect, the ADC should be analyzed at higher b-values or separated from the perfusion component. The biexponential fit overestimates the perfusion fraction because of increased sensitivity to noise at low b-values.
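
    Both fits are easy to reproduce on a noiseless synthetic decay; the parameter values below are invented, and scipy's generic least-squares routines stand in for whatever fitting code the study used:

        import numpy as np
        from scipy.optimize import curve_fit

        b = np.array([0, 20, 50, 100, 200, 400, 600, 800.0])     # s/mm^2
        D_t, Ds_t, f_t = 0.8e-3, 15e-3, 0.25                     # true values
        S = (1 - f_t) * np.exp(-b * D_t) + f_t * np.exp(-b * Ds_t)  # S0 = 1

        def ivim(b, f, D, Ds):
            return (1 - f) * np.exp(-b * D) + f * np.exp(-b * Ds)

        # Biexponential fit over all b-values at once.
        (f_bi, D_bi, Ds_bi), _ = curve_fit(ivim, b, S, p0=[0.1, 1e-3, 10e-3])

        # Segmented fit: for b >= 200 s/mm^2 the pseudodiffusion term has
        # largely decayed, so log S is roughly linear with slope -D and
        # intercept log(1 - f).
        hi = b >= 200
        slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
        D_seg, f_seg = -slope, 1 - np.exp(intercept)
        print(D_bi, Ds_bi, f_bi, D_seg, f_seg)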

  17. Imaging Brain Dynamics Using Independent Component Analysis

    PubMed Central

    Jung, Tzyy-Ping; Makeig, Scott; McKeown, Martin J.; Bell, Anthony J.; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain. PMID:20824156
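
    The artifact-removal workflow reduces to three calls: unmix, zero the artifact component, re-mix. A hedged sketch on synthetic channels; FastICA stands in for the ICA variants used in this line of work, and in practice the artifactual component must be identified by inspection:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(9)
        t = np.linspace(0, 10, 2000)
        sources = np.column_stack([np.sin(9 * t),              # brain rhythm
                                   np.sign(np.sin(0.5 * t)),   # blink-like
                                   rng.normal(size=t.size)])   # noise
        X = sources @ rng.normal(size=(3, 6))   # six scalp channels

        ica = FastICA(n_components=3, random_state=0)
        S = ica.fit_transform(X)
        S[:, 1] = 0.0                            # suppose IC 1 is the artifact
        X_clean = ica.inverse_transform(S)       # artifact-suppressed channels
        print(X_clean.shape)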

  18. Spectral Components Analysis of Diffuse Emission Processes

    SciTech Connect

    Malyshev, Dmitry; /KIPAC, Menlo Park

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with the energy spectra. Most of the existing methods use some information about the spatial distribution of the components, e.g., closeness to an external template, independence of components, etc., in order to separate them. In this paper we propose a method where one puts conditions on the spectra only. The advantages of our method are: 1) it is internal: the maps of the components are constructed as combinations of data in different energy bins; 2) the components may be correlated with each other; 3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from a maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an Appendix, we present a generalization of the method, where one can also add a number of external templates.
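
    Once the spectra are fixed, the core of the method is a per-pixel least-squares combination of the data in the energy bins; the sketch below skips the likelihood maximization over spectral parameters and uses invented two-component spectra:

        import numpy as np

        rng = np.random.default_rng(10)
        n_bands, n_pix = 5, 1000
        A = np.column_stack([np.ones(n_bands),                 # flat CMB-like
                             np.linspace(1.0, 3.0, n_bands)])  # foreground
        maps_true = rng.normal(size=(2, n_pix))
        data = A @ maps_true + 0.01 * rng.normal(size=(n_bands, n_pix))

        # Component maps as linear combinations of the data in energy bins.
        maps_est, *_ = np.linalg.lstsq(A, data, rcond=None)
        print(np.abs(maps_est - maps_true).max())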

  19. Automated resolution of chromatographic signals by independent component analysis-orthogonal signal deconvolution in comprehensive gas chromatography/mass spectrometry-based metabolomics.

    PubMed

    Domingo-Almenara, Xavier; Perera, Alexandre; Ramírez, Noelia; Brezmes, Jesus

    2016-07-01

    Comprehensive gas chromatography-mass spectrometry (GC×GC-MS) provides a different perspective in metabolomics profiling of samples. However, algorithms for GC×GC-MS data processing are needed in order to automatically process the data and extract the purest information about the compounds appearing in complex biological samples. This study shows the capability of independent component analysis-orthogonal signal deconvolution (ICA-OSD), an algorithm based on blind source separation and distributed in an R package called osd, to extract the spectra of the compounds appearing in GC×GC-MS chromatograms in an automated manner. We studied the performance of ICA-OSD by the quantification of 38 metabolites through a set of 20 Jurkat cell samples analyzed by GC×GC-MS. The quantification by ICA-OSD was compared with a supervised quantification by selective ions, and most of the R² coefficients of determination were in good agreement (R² > 0.90), while up to 24 cases exhibited an excellent linear relation (R² > 0.95). We concluded that ICA-OSD can be used to resolve co-eluted compounds in GC×GC-MS. PMID:27208528

  20. Impact of parameter fluctuations on the performance of ethanol precipitation in production of Re Du Ning Injections, based on HPLC fingerprints and principal component analysis.

    PubMed

    Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei

    2016-01-01

    The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under the experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one by one, and then examined by HPLC fingerprint analyses. Unlike the traditional evaluation model based on a single constituent or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the most important parameters influencing the recovery of active compounds in the precipitation resultants. The present study would provide some reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and help pharmaceutical enterprises adopt a scientific, reasonable and cost-effective approach to ensure the batch-to-batch quality consistency of the final products. PMID:26850350

  1. Collagen-based proteinaceous binder-pigment interaction study under UV ageing conditions by MALDI-TOF-MS and principal component analysis.

    PubMed

    Romero-Pastor, Julia; Navas, Natalia; Kuckova, Stepanka; Rodríguez-Navarro, Alejandro; Cardell, Carolina

    2012-03-01

    This study focuses on acquiring information on the degradation process of proteinaceous binders due to ultraviolet (UV) radiation and possible interactions owing to the presence of historical mineral pigments. With this aim, three different paint model samples were prepared according to medieval recipes, using rabbit glue as the proteinaceous binder. One of these model samples contained only the binder, and the other two were prepared by mixing each of the pigments (cinnabar or azurite) with the binder (glue tempera model samples). The model samples were studied by applying Principal Component Analysis (PCA) to their mass spectra obtained with Matrix-Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF-MS). The complementary use of Fourier Transform Infrared Spectroscopy to study conformational changes in the secondary structure of the proteinaceous binder is also proposed. Ageing effects on the model samples after up to 3000 h of UV irradiation were periodically analyzed by the proposed approach. PCA on MS data proved capable of identifying significant changes in the model samples, and the results suggested different ageing behavior depending on the pigment present. This research represents the first attempt to use this approach (PCA on MALDI-TOF-MS data) in the field of Cultural Heritage and demonstrates its potential benefits in the study of proteinaceous artistic materials for purposes of conservation and restoration. PMID:22431458

  2. Principal component analysis for designed experiments

    PubMed Central

    2015-01-01

    Background Principal component analysis is used to summarize matrix data, such as that found in transcriptome, proteome or metabolome data and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. Results The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the

  3. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is shown to discover dominating modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  4. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is shown to discover dominating modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  5. On-board energy management for high-speed aerospace vehicles: System and component-level energy-based optimization and analysis

    NASA Astrophysics Data System (ADS)

    Taylor, Trent Matthew

    This dissertation addresses in detail three main topics for advancing the state-of-the-art in hypersonic aerospace systems: (1) the development of a synergistic method based on entropy generation in order to analyze, evaluate, and optimize vehicle performance, (2) the development and analysis of innovative unconventional flow-control methods for increasing vehicle performance, utilizing entropy generation as a fundamental descriptor and predictor of performance, and (3) an investigation of issues arising when evaluating (predicting) actual flight vehicle performance using ground test facilities. Vehicle performance is analyzed beginning from fundamental considerations involving fluid and thermodynamic balance relationships. The results enable the use of entropy generation as the true "common currency" (single loss parameter) for systematic and consistent evaluation of performance losses across the vehicle as an integrated system. Innovative flow control methods are modeled using state-of-the-art CFD codes in which the flow is energized in targeted local zones, with emphasis on shock wave modification. Substantial drag reductions are observed, such that drag can decrease to 25% of the baseline. Full-vehicle studies are then conducted comparing traditional and flow-controlled designs; very similar axial force is found, with an accompanying increase in lift for the flow-control design to account for on-board energy-addition components. Finally, a full engine flowpath configuration is designed for computational studies of ground test performance versus actual flight performance, with emphasis on understanding the effect of ground-based vitiate (test contaminant). It is observed that the presence of vitiate in the test medium can also have a significant first-order effect on ignition delay as well as on the thermodynamic response to a given heat release in the fuel.

  6. Sparse principal component analysis in cancer research

    PubMed Central

    Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa

    2015-01-01

    A critical challenge in analyzing high-dimensional data in cancer research is how to reduce the dimension of the data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that can help reduce data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset illustrates the potential application of sparse PCAs in cancer research. PMID:26719835
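
    As a rough illustration of the contrast this review draws, the sketch below fits ordinary PCA and one L1-penalised sparse variant (scikit-learn's SparsePCA) to a placeholder expression-like matrix; the penalty value and data are arbitrary assumptions:

    ```python
    # Dense vs. sparse loadings on a hypothetical expression matrix.
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(2)
    X = rng.normal(size=(60, 300))      # rows: samples, columns: genes

    dense = PCA(n_components=3).fit(X)
    sparse = SparsePCA(n_components=3, alpha=2.0, random_state=0).fit(X)

    # Ordinary loadings are dense; sparse PCA zeroes most of them, which is
    # what makes the selected variables (genes) directly interpretable.
    print("nonzero loadings, PCA:   ", np.count_nonzero(dense.components_))
    print("nonzero loadings, sparse:", np.count_nonzero(sparse.components_))
    ```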

  7. [Rapid identification of crude and sweated dipsaci radix based on near-infrared spectroscopy combined with principal component analysis-mahalanobis distance].

    PubMed

    Du, Wei-Feng; Jia, Yong-Qiang; Jiang, Dong-Jing; Zhang, Hao

    2014-12-01

    In order to discriminate the crude and sweated Dipsaci Radix correctly and rapidly, crude and sweated Dipsaci Radix samples were scanned with an NIR spectrometer, and an identification model was developed by near-infrared spectroscopy combined with a principal component analysis-Mahalanobis distance pattern recognition method. The pretreated spectral data of 129 crude samples and 86 sweated ones were analyzed through principal component analysis (PCA). The identification model was developed by choosing the 9881.46-4119.20 cm(-1) spectral region and "SNV + spectrum + S-G" preprocessing of the original spectra, with 14 principal components; it was then verified on a prediction set, identifying samples with 100% accuracy. The rapid NIR identification model for crude and sweated Dipsaci Radix is feasible and efficient, and could be used as an auxiliary means of identifying the crude and sweated Dipsaci Radix. PMID:25911809
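
    A hedged sketch of the general PCA-Mahalanobis pattern-recognition scheme named in the title (not the paper's actual preprocessing, spectral region, or data); the synthetic arrays stand in for pretreated NIR spectra:

    ```python
    # Classify a spectrum by Mahalanobis distance to each class in PCA space.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    crude = rng.normal(0.0, 1.0, size=(129, 700))    # stand-in crude spectra
    sweated = rng.normal(0.3, 1.0, size=(86, 700))   # stand-in sweated spectra

    pca = PCA(n_components=14).fit(np.vstack([crude, sweated]))

    def mahalanobis_to_class(score, class_scores):
        mu = class_scores.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(class_scores, rowvar=False))
        d = score - mu
        return float(d @ cov_inv @ d)

    scores_crude = pca.transform(crude)
    scores_sweated = pca.transform(sweated)
    new = pca.transform(rng.normal(0.3, 1.0, size=(1, 700)))[0]

    d_s = mahalanobis_to_class(new, scores_sweated)
    d_c = mahalanobis_to_class(new, scores_crude)
    print("sweated" if d_s < d_c else "crude")
    ```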

  8. Structural Analysis Methods Development for Turbine Hot Section Components

    NASA Technical Reports Server (NTRS)

    Thompson, Robert L.

    1988-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine Hot Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components of isotropic and anisotropic nickel-base superalloys are presented.

  9. Practical Issues in Component Aging Analysis

    SciTech Connect

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair same-as-new, leading to a renewal process, and repair same-as-old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process; the common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
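
    One simple nonparametric trend test of the kind alluded to here is the Laplace test, sketched below for a time-truncated observation window; the failure times are invented. A statistic near zero is consistent with a stationary renewal process, while a large positive value (inter-failure times shrinking) points toward a nonhomogeneous Poisson process:

    ```python
    # Laplace trend test on cumulative failure times (illustrative values).
    import math

    failure_times = [210.0, 450.0, 610.0, 700.0, 770.0, 820.0]  # hours
    T = 900.0                                                    # window length
    n = len(failure_times)

    # u = (mean event time - T/2) / (T * sqrt(1/(12n))), time-truncated case.
    u = (sum(failure_times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))
    print(f"Laplace u = {u:.2f}")  # u > 0 here: inter-failure times shrink,
                                   # suggesting ageing rather than renewal
    ```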

  10. BBH Classification Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Cadonati, Laura; Clark, James; Day, Brian; Jeng, Ik Siong; Lombardi, Alexander; London, Lionel; Mangini, Nicholas; Logue, Josh

    2015-04-01

    Binary black holes will inspiral, merge and ringdown in the LIGO/VIRGO band for an interesting range of total masses. We present an update on our approach of using Principal Component Analysis to build models of NR BBH waveforms that focus on the merger for generic BBH signals. These models are intended to be used to conduct coarse parameter estimation for gravitational wave burst candidate events. The proposed benefit is a fast, optimized catalog that classifies bulk features in the signal. NSFPHY-0955773, 0955825, SUPA and STFC UK. Simulations by NSF XSEDE PHY120016 and PHY090030.

  11. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  12. Analysis of exogenous components of mortality risks.

    PubMed

    Blinkin, V L

    1998-04-01

    A new technique for deriving exogenous components of mortality risks from national vital statistics has been developed. Each observed death rate Dij (where i corresponds to calendar time (year or interval of years) and j denotes the number of the corresponding age group) was represented as Dij = Aj + BiCj, and the unknown quantities Aj, Bi, and Cj were estimated by a special procedure using the least-squares principle. The coefficients of variation do not exceed 10%. It is shown that the term Aj can be interpreted as the endogenous component and the second term BiCj as the exogenous component of the death rate. The aggregate of endogenous components Aj can be described by a regression function corresponding to the Gompertz-Makeham law, A(tau) = gamma + beta*exp(alpha*tau), where gamma, beta, and alpha are constants and tau is age; evaluated at tau = tau_j, A(tau_j) = Aj, where tau_j is the value of age tau in the j-th age group. The coefficients of variation for this representation do not exceed 4%. An analysis of exogenous risk levels in the Moscow and Russian populations during 1980-1995 shows that from 1992 up to 1994 all components of exogenous risk in the Moscow population were increasing. The greatest contribution to the total level of exogenous risk came from lethal diseases, whose death rate was 387 deaths per 100,000 persons in 1994, i.e., 61.9% of all deaths. The dynamics of exogenous mortality risk change during 1990-1994 in the Moscow population and in the Russian population without Moscow were identical: the risk was increasing, and its value in the Russian population was higher than that in the Moscow population. PMID:9637078

  13. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intensive and time-consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
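
    A minimal sketch of the fixed-eigenvector variant described above: each pixel's thermal time history is projected onto a pre-computed orthonormal basis rather than onto eigenvectors estimated from the (possibly defect-dominated) data. Here the basis is a random placeholder; in the method described it would come from an analytic model of the thermal response:

    ```python
    # Project a thermographic image stack onto a fixed temporal basis.
    import numpy as np

    n_frames, h, w = 200, 64, 64
    data = np.random.default_rng(4).random((n_frames, h * w))  # flattened stack

    # Pre-computed orthonormal basis over time (placeholder for a model basis).
    basis, _ = np.linalg.qr(np.random.default_rng(5).random((n_frames, 5)))

    signal = data - data.mean(axis=0)           # remove mean thermal response
    scores = basis.T @ signal                   # (k x n_pixels) projections
    component_images = scores.reshape(5, h, w)  # each slice highlights defects
    print(component_images.shape)
    ```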

  14. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle

  15. Analysis of Variance Components for Genetic Markers with Unphased Genotypes

    PubMed Central

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions. PMID:27468297

  16. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-02-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in southern Ontario (ON), Canada, were used to evaluate Environment Canada's regional chemical transport model predictions of primary organic aerosol (POA). Environment Canada's operational numerical weather prediction model and the 2006 Canadian and 2005 US national emissions inventories were used as input to the chemical transport model (named AURAMS). Particle-component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. Co-located carbon monoxide (CO), PM2.5 black carbon (BC), and PM1 SO4 measurements were also used for evaluation and interpretation, permitting a detailed diagnostic model evaluation. At the urban site, good agreement was observed for the comparison of daytime campaign PM1 POA and HOA mean values: 1.1 μg m-3 vs. 1.2 μg m-3, respectively. However, a POA overprediction was evident on calm nights due to an overly-stable model surface layer. Biases in model POA predictions trended from positive to negative with increasing HOA values. This trend has several possible explanations, including (1) underweighting of urban locations in particulate matter (PM) spatial surrogate fields, (2) overly-coarse model grid spacing for resolving urban-scale sources, and (3) lack of a model particle POA evaporation process during dilution of vehicular POA tail-pipe emissions to urban scales. Furthermore, a trend in POA bias was observed at the urban site as a function of the BC/HOA ratio, suggesting a possible association of POA underprediction for diesel combustion sources. For several time periods, POA overprediction was also observed for sulphate-rich plumes, suggesting that our model POA fractions for the PM2.5 chemical speciation profiles may be too high for these point sources. At the rural Harrow site

  17. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-09-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species. At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m-3 and 1.2 μg m-3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer. This model behaviour can be explained by a combination of model under

  18. Seismic component fragility data base for IPEEE

    SciTech Connect

    Bandyopadhyay, K.; Hofmayer, C.

    1990-01-01

    Seismic probabilistic risk assessment or a seismic margin study will require a reliable data base of seismic fragility of various equipment classes. Brookhaven National Laboratory (BNL) has selected a group of equipment and generically evaluated the seismic fragility of each equipment class by use of existing test data. This paper briefly discusses the evaluation methodology and the fragility results. The fragility analysis results when used in the Individual Plant Examination for External Events (IPEEE) Program for nuclear power plants are expected to provide insights into seismic vulnerabilities of equipment for earthquakes beyond the design basis. 3 refs., 1 fig., 1 tab.

  19. A structured overview of simultaneous component based data integration

    PubMed Central

    Van Deun, Katrijn; Smilde, Age K; van der Werf, Mariët J; Kiers, Henk AL; Van Mechelen, Iven

    2009-01-01

    Background Data integration is currently one of the main challenges in the biomedical sciences. Often different pieces of information are gathered on the same set of entities (e.g., tissues, culture samples, biomolecules) with the different pieces stemming, for example, from different measurement techniques. This implies that more and more data appear that consist of two or more data arrays that have a shared mode. An integrative analysis of such coupled data should be based on a simultaneous analysis of all data arrays. In this respect, the family of simultaneous component methods (e.g., SUM-PCA, unrestricted PCovR, MFA, STATIS, and SCA-P) is a natural choice. Yet, different simultaneous component methods may lead to quite different results. Results We offer a structured overview of simultaneous component methods that frames them in a principal components setting such that both the common core of the methods and the specific elements with regard to which they differ are highlighted. An overview of principles is given that may guide the data analyst in choosing an appropriate simultaneous component method. Several theoretical and practical issues are illustrated with an empirical example on metabolomics data for Escherichia coli as obtained with different analytical chemical measurement methods. Conclusion Of the aspects in which the simultaneous component methods differ, pre-processing and weighting are consequential. Especially, the type of weighting of the different matrices is essential for simultaneous component analysis. These types are shown to be linked to different specifications of the idea of a fair integration of the different coupled arrays. PMID:19671149
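
    In its simplest (SUM-PCA-like) form, a simultaneous component analysis concatenates the coupled blocks over their shared sample mode after block weighting and decomposes them together. The sketch below uses placeholder blocks and one particular weighting choice (equal total sum of squares per block), since weighting is the consequential decision highlighted above:

    ```python
    # Joint decomposition of two blocks sharing the sample mode.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    block_a = rng.normal(size=(50, 120))  # e.g. one analytical platform
    block_b = rng.normal(size=(50, 30))   # another platform, same 50 samples

    # Centre each block, then scale to equal total sum of squares so that
    # neither block dominates the joint solution.
    blocks = [b - b.mean(axis=0) for b in (block_a, block_b)]
    blocks = [b / np.linalg.norm(b) for b in blocks]

    joint = np.hstack(blocks)
    scores = PCA(n_components=3).fit_transform(joint)  # common component scores
    print(scores.shape)
    ```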

  20. Quantitative Analysis of PMLA Nanoconjugate Components after Backbone Cleavage

    PubMed Central

    Ding, Hui; Patil, Rameshwar; Portilla-Arias, Jose; Black, Keith L.; Ljubimova, Julia Y.; Holler, Eggehard

    2015-01-01

    Multifunctional polymer nanoconjugates containing multiple components show great promise in cancer therapy, but in most cases complete analysis of each component is difficult. Polymalic acid (PMLA) based nanoconjugates have demonstrated successful brain and breast cancer treatment. They consist of multiple components including targeting antibodies, Morpholino antisense oligonucleotides (AONs), and endosome escape moieties. The component analysis of PMLA nanoconjugates is extremely difficult using conventional spectrometry and HPLC methods. Taking advantage of the polyester nature of PMLA, which can be cleaved by ammonium hydroxide, we describe a method to analyze the antibody and AON content of nanoconjugates simultaneously using SEC-HPLC by selectively cleaving the PMLA backbone. The selected cleavage conditions only degrade PMLA without affecting the integrity and biological activity of the antibody. Although the amount of antibody could also be determined using the bicinchoninic acid (BCA) method, our selective cleavage method gives more reliable results and is more powerful. Our approach provides a new direction for the component analysis of polymer nanoconjugates and nanoparticles. PMID:25894227

  1. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106

  2. Meta-Analysis of Mathematic Basic-Fact Fluency Interventions: A Component Analysis

    ERIC Educational Resources Information Center

    Codding, Robin S.; Burns, Matthew K.; Lukito, Gracia

    2011-01-01

    Mathematics fluency is a critical component of mathematics learning yet few attempts have been made to synthesize this research base. Seventeen single-case design studies with 55 participants were reviewed using meta-analytic procedures. A component analysis of practice elements was conducted and treatment intensity and feasibility were examined.…

  3. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing longterm, consistent SO2 records for air quality and climate research.
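
    The one-step estimation described here can be caricatured as a single least-squares fit: background variability is spanned by principal components trained on SO2-free spectra, and the fitted coefficient of the SO2 Jacobian is the column amount. Everything below (spectra, Jacobian, component count) is synthetic and purely illustrative:

    ```python
    # PCA background + Jacobian fit for a column-density retrieval.
    import numpy as np

    rng = np.random.default_rng(7)
    n_wl = 300
    so2_free = rng.normal(size=(2000, n_wl))   # training radiances, no SO2
    mean_bg = so2_free.mean(axis=0)

    # Leading principal components of the SO2-free radiance variability.
    _, _, vt = np.linalg.svd(so2_free - mean_bg, full_matrices=False)
    pcs = vt[:8]                               # (8, n_wl)

    jacobian = rng.normal(size=n_wl)           # dI/dSO2 from a radiative model
    measured = mean_bg + 0.7 * jacobian + rng.normal(scale=0.1, size=n_wl)

    # One-step fit: measured - mean ~ PC coefficients + scd * jacobian.
    design = np.vstack([pcs, jacobian]).T      # (n_wl, 9)
    coeffs, *_ = np.linalg.lstsq(design, measured - mean_bg, rcond=None)
    print("retrieved column (arbitrary units):", round(coeffs[-1], 3))
    ```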

  4. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  5. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear accelerator-based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.

  6. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same; it depends upon what other components are in the scene and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We will investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a singular, mineralogic, spectral endmember. It is more likely that there are many spectral endmembers that vary identically to within the noise level, such that the PCA technique will not be able to distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 μm absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data were gathered at the NASA Infrared Telescope Facility in collaboration with

  7. Application of Principal Component Analysis to EUV multilayer defect printing

    NASA Astrophysics Data System (ADS)

    Xu, Dongbo; Evanschitzky, Peter; Erdmann, Andreas

    2015-09-01

    This paper proposes a new method for the characterization of multilayer defects on EUV masks. To reconstruct the defect geometry parameters from the intensity and phase of a defect, Principal Component Analysis (PCA) is employed to parametrize the intensity and phase distributions into principal component coefficients. In order to construct the base functions of the PCA, a combination of a reference multilayer defect and appropriate pupil filters is introduced to obtain the designed sets of intensity and phase distributions. Finally, an Artificial Neural Network (ANN) is applied to correlate the principal component coefficients of the intensity and phase of the defect with the defect geometry parameters and to reconstruct unknown defect geometries.

  8. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two-channel exercise ECG signals are acquired. We decompose these two channel signals into eight wavelet layers and sum the useful wavelet coefficients separately, obtaining two channel ECG signals free of baseline drift and other interference components. However, they still contain electrode-movement noise, power-frequency interference and other interferences. Second, we process these two channels, together with one manually constructed channel, using independent component analysis to obtain the separated ECG signal; the residual noises are removed effectively. Finally, a comparative experiment is made between the same two channels of exercise ECG processed directly with independent component analysis and with the method proposed in this paper, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error decreases by 2.522, proving that the proposed method has high reliability.
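
    A hedged sketch of this two-stage scheme, assuming PyWavelets and scikit-learn; the "ECG" channels are crude synthetic stand-ins, the constructed reference channel is an assumed power-line-frequency sine, and only the eight-layer decomposition depth follows the text above:

    ```python
    # Wavelet baseline removal followed by ICA source separation.
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(8)
    t = np.linspace(0, 10, 4096)
    ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 31          # crude QRS-like spikes
    drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)             # baseline wander
    ch1 = ecg_like + drift + 0.1 * rng.normal(size=t.size)
    ch2 = 0.8 * ecg_like + 0.8 * drift + 0.1 * rng.normal(size=t.size)

    def remove_baseline(x, wavelet="db4", level=8):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])   # drop approximation = drift
        return pywt.waverec(coeffs, wavelet)[: x.size]

    channels = np.column_stack([remove_baseline(ch1), remove_baseline(ch2),
                                np.sin(2 * np.pi * 50 * t)])  # constructed ref
    sources = FastICA(n_components=3, random_state=0).fit_transform(channels)
    print(sources.shape)   # one column should carry the cleaned ECG component
    ```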

  9. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in measuring the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multi-spectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, and the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It is determined that, while the standard version of the VCA algorithm attains sensitivity similar to the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally improve detection sensitivity in experimental optoacoustic measurements.

  10. Derivative component analysis for mass spectral serum proteomic profiles

    PubMed Central

    2014-01-01

    Background As a promising way to transform medicine, mass spectrometry based proteomics technologies have seen great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods that are able to capture essential data behaviors to achieve clinical-level disease diagnosis. Moreover, the field faces a challenge from data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes identified biomarker patterns to lose repeatability and prevents their real clinical usage. Methods In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, derivative component analysis examines input proteomics data in a multi-resolution approach by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker, integrating it with support vector machines to tackle the reproducibility issue, and comparing it with state-of-the-art peers. Results Our results show that high-dimensional proteomics data are actually linearly separable under the proposed derivative component analysis (DCA). As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also suggests an effective resolution to proteomics data's reproducibility problem and provides new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical-level diagnostic performance reproducible across different proteomic data, which is more robust and systematic than the existing biomarker discovery based

  11. Component design bases - A template approach

    SciTech Connect

    Pabst, L.F.; Strickland, K.M.

    1991-01-01

    A well-documented nuclear plant design basis can enhance plant safety and availability. Older plants, however, often lack historical evidence of the original design intent, particularly for individual components. Most plant documentation describes the actual design (what is) rather than the bounding limits of the design. Without knowledge of these design limits, information from system descriptions and equipment specifications is often interpreted as inviolate design requirements. Such interpretations may lead to unnecessary design conservatism in plant modifications and unnecessary restrictions on plant operation. In 1986, Florida Power and Light Company's (FP&L's) Turkey Point plant embarked on one of the first design basis reconstitution programs in the United States to catalog the true design requirements. As the program developed, design basis users expressed a need for additional information at the component level. This paper outlines a structured (template) approach to develop useful component design basis information (including the WHYs behind the design).

  12. CHEMICAL ANALYSIS METHODS FOR ATMOSPHERIC AEROSOL COMPONENTS

    EPA Science Inventory

    This chapter surveys the analytical techniques used to determine the concentrations of aerosol mass and its chemical components. The techniques surveyed include mass, major ions (sulfate, nitrate, ammonium), organic carbon, elemental carbon, and trace elements. As reported in...

  13. Analysis of failed nuclear plant components

    NASA Astrophysics Data System (ADS)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  14. Effect of the components' interface on the synthesis of methanol over Cu/ZnO from CO2/H2: a microkinetic analysis based on DFT + U calculations.

    PubMed

    Tang, Qian-Lin; Zou, Wen-Tian; Huang, Run-Kun; Wang, Qi; Duan, Xiao-Xuan

    2015-03-21

    The elucidation of chemical reactions occurring on composite systems (e.g., copper (Cu)/zincite (ZnO)) from first principles is a challenging task because of their very large sizes and complicated equilibrium geometries. By combining the density functional theory plus U (DFT + U) method with microkinetic modeling, the present study has investigated the role of the phase boundary in CO2 hydrogenation to methanol over Cu/ZnO. Calculated turnover frequencies under realistic conditions revealed that the interface between the two catalyst components does not create hydrogenation sites; rather, the importance of interfacial copper in providing spillover hydrogen for remote Cu(111) sites was stressed. Coupled with the fact that methanol production on the binary catalyst was recently believed to predominantly involve the bulk metallic surface, the spillover of interface hydrogen atoms onto Cu(111) facets facilitates the production process. The cooperative influence of the two different kinds of copper sites can be rationalized by applying the Brønsted-Evans-Polanyi (BEP) relationship, and allows us to find that the catalytic activity of ZnO-supported Cu catalysts varies in a volcano-type manner with decreasing particle size. Our results may have useful implications for the future design of new Cu/ZnO-based materials for CO2 transformation to methanol. PMID:25697118

  15. Using surface electromyography (SEMG) to classify low back pain based on lifting capacity evaluation with principal component analysis neural network method.

    PubMed

    Hung, Chia-Chun; Shen, Tsu-Wang; Liang, Chung-Chao; Wu, Wen-Tien

    2014-01-01

    Low back pain (LBP) is a leading cause of disability, and the population with low back pain has grown continuously in recent years. This study attempts to distinguish LBP patients from healthy subjects by using objective surface electromyography (SEMG) as a quantitative score for clinical evaluations. Twenty-six healthy and 26 low back pain subjects were involved in this research. They lifted different weights in static and dynamic lifting processes. Multiple features were extracted from the raw SEMG data, including energy and frequency indexes, and false discovery rate (FDR) control was used to omit false-positive features. A principal component analysis neural network (PCANN) was then used for classification. The results showed that features from different loadings (including 30% and 50% loading) on lifting can be used to distinguish healthy and back pain subjects. Using the PCANN method, accuracies of more than 80% were achieved for the different lifting weights. Moreover, some EMG features correlated with clinical scales of exertion, fatigue, and pain. This technology can potentially be used in future research as a computer-aided diagnosis tool for LBP evaluation. PMID:25569886
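
    A minimal sketch of a PCA-plus-neural-network (PCANN-style) classifier on SEMG-like feature vectors; the features, component count, and network size below are placeholders rather than the study's configuration:

    ```python
    # PCA feature compression feeding a small neural-network classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    features = np.vstack([rng.normal(0.0, 1.0, (26, 40)),   # healthy subjects
                          rng.normal(0.6, 1.0, (26, 40))])  # LBP patients
    labels = np.array([0] * 26 + [1] * 26)

    model = make_pipeline(PCA(n_components=8),
                          MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=2000, random_state=0))
    print("CV accuracy:", cross_val_score(model, features, labels, cv=5).mean())
    ```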

  16. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children.

    PubMed

    Wassenburg, Stephanie I; de Koning, Björn B; de Vries, Meinou H; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension. PMID:27378989

  17. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children

    PubMed Central

    Wassenburg, Stephanie I.; de Koning, Björn B.; de Vries, Meinou H.; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension. PMID:27378989

  18. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to both field collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
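
    For reference, robust PCA in the principal-component-pursuit sense can be sketched with a basic alternating scheme: singular-value thresholding for the low-rank part and soft thresholding for the sparse part, into which anomalies would fall. This is a generic textbook-style implementation with common default parameters, not the paper's processing chain, and the input matrix is a random placeholder for preprocessed frequency-domain GPR data:

    ```python
    # Principal component pursuit: M ~ L (low rank) + S (sparse).
    import numpy as np

    def rpca(M, max_iter=200, tol=1e-7):
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))       # standard sparsity weight
        mu = 0.25 * m * n / np.abs(M).sum()  # heuristic penalty parameter
        S = np.zeros_like(M)
        Y = np.zeros_like(M)

        def shrink(X, tau):                  # soft thresholding
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        for _ in range(max_iter):
            # Singular-value thresholding gives the low-rank update.
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            # Soft thresholding gives the sparse update.
            S = shrink(M - L + Y / mu, lam / mu)
            residual = M - L - S
            Y += mu * residual               # dual ascent step
            if np.linalg.norm(residual) < tol * np.linalg.norm(M):
                break
        return L, S

    L, S = rpca(np.random.default_rng(10).random((100, 80)))
    print(S.shape)   # the sparse component carries the anomaly energy
    ```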

  19. EXAFS and principal component analysis : a new shell game.

    SciTech Connect

    Wasserman, S.

    1998-10-28

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.

  20. Components based on magneto-elastic phenomena

    NASA Astrophysics Data System (ADS)

    Deckert, J.

    1982-01-01

    Magnetoelastic materials and their applications in magnetostrictive oscillators, electromechanical filters and delay lines are discussed. The properties of commercial magnetoelastic materials are tabulated. The performance of magnetostrictive, piezoelectric, and quartz oscillators is compared. The development of low-cost quartz and piezoelectric materials reduces the significance of magnetostrictive components. Uses are restricted to springs and diaphragms, e.g. in clocks or control engineering, temperature-independent resonators, and vibrational systems.

  1. Appliance of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models for each server, and action models for clients and intruders. We applied ICA to analyze the audit trail of the simulated information system and report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly and related an attack to the anomalies produced in a normal application under the influence of that attack.

  2. Multi-component joint analysis of surface waves

    NASA Astrophysics Data System (ADS)

    Dal Moro, Giancarlo; Moura, Rui Miguel Marques; Moustafa, Sayed S. R.

    2015-08-01

    Propagation of surface waves can occur with complex energy distribution amongst the various modes. It is shown that even simple VS (shear-wave velocity) profiles can generate velocity spectra that, because of complex mode excitation, can be quite difficult to interpret in terms of modal dispersion curves. In some cases, Rayleigh waves show relevant differences depending on the considered component (radial or vertical) and the kind of source (vertical impact or explosive). Contrary to several simplistic assumptions often proposed, it is shown, both via synthetic and field datasets, that the fundamental mode of Rayleigh waves can be almost completely absent. This sort of evidence demonstrates the importance of a multi-component analysis capable of providing the necessary elements to properly interpret the data and adequately constrain the subsurface model. It is also shown how, using only horizontal geophones, both Love and radial-component Rayleigh waves can be acquired efficiently and quickly. The presented field dataset reports a case where Rayleigh waves (both their vertical and radial components) appear largely dominated by higher modes, with little or no evidence of the fundamental mode. The inversion of the radial and vertical components of Rayleigh waves jointly with Love waves is performed by adopting a multi-objective inversion scheme based on the computation of synthetic seismograms for the three considered components and the minimization of the whole velocity spectra misfits (Full Velocity Spectra - FVS - inversion). Such a FVS multi-component joint inversion can better handle complex velocity spectra, thus providing a more robust subsurface model not affected by erroneous velocity spectra interpretations and non-uniqueness of the solution.

  3. Primary component analysis method and reduction of seismicity parameters

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Ma, Qin-Zhong; Lin, Ming-Zhou; Wu, Geng-Feng; Wu, Shao-Chun

    2005-09-01

    In this paper, a primary component analysis is made using 8 seismicity parameters: earthquake frequency N (Ml≥3.0), b-value, η-value, A(b)-value, Mf-value, Ac-value, C-value and D-value, which reflect the characteristics of the magnitude, time and space distribution of seismicity from different respects. Using the primary component analysis method, a synthesis parameter W reflecting the anomalous features of earthquake magnitude, time and space distribution can be obtained. Generally, there is some correlation among the 8 parameters, but their variations differ in different periods, and earthquake prediction based on these parameters alone does not perform well. However, the synthesis parameter W showed obvious anomalies before 13 earthquakes (MS≥5.8) that occurred in North China, which indicates that W can better reflect the anomalous characteristics of the magnitude, time and space distribution of seismicity. Other problems related to the conclusions drawn by the primary component analysis method are also discussed.
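
    Whether the paper forms W exactly this way is not stated, but one standard construction of such a synthesis parameter is the first principal component score of the standardized parameter matrix, sketched here with random placeholder values:

    ```python
    # First principal component of standardized seismicity parameters as W.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(11)
    # rows = time windows; columns = N, b, eta, A(b), Mf, Ac, C, D (invented)
    params = rng.normal(size=(120, 8))

    z = StandardScaler().fit_transform(params)
    W = PCA(n_components=1).fit_transform(z).ravel()  # one value per window
    print(W[:5].round(2))
    ```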

  4. Multibody model reduction by component mode synthesis and component cost analysis

    NASA Technical Reports Server (NTRS)

    Spanos, J. T.; Mingori, D. L.

    1990-01-01

    The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.

  5. Fast independent component analysis algorithm for quaternion valued signals.

    PubMed

    Javidi, Soroush; Took, Clive Cheong; Mandic, Danilo P

    2011-12-01

    An extension of the fast independent component analysis algorithm is proposed for the blind separation of both Q-proper and Q-improper quaternion-valued signals. This is achieved by maximizing a negentropy-based cost function, and is derived rigorously using the recently developed HR calculus in order to implement Newton optimization in the augmented quaternion statistics framework. It is shown that the use of augmented statistics and the associated widely linear modeling provides theoretical and practical advantages when dealing with general quaternion signals with noncircular (rotation-dependent) distributions. Simulations using both benchmark and real-world quaternion-valued signals support the approach. PMID:22027374

  6. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  7. Columbia River Component Data Gap Analysis

    SciTech Connect

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  8. Si-based RF MEMS components.

    SciTech Connect

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten- and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  9. Core Bioactive Components Promoting Blood Circulation in the Traditional Chinese Medicine Compound Xueshuantong Capsule (CXC) Based on the Relevance Analysis between Chemical HPLC Fingerprint and In Vivo Biological Effects

    PubMed Central

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and
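
    A hedged sketch of the relevance-analysis step: given a matrix of HPLC peak areas for the 21 identified components across the uniform-design samples and one measured biological index (for example, whole-blood viscosity), rank component contributions with PCA followed by ridge regression. The scikit-learn calls are standard; the data shapes, synthetic values, and choice of index are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Assumed shapes: 9 uniform-design samples x 21 HPLC peak areas,
# plus one biological index per sample (e.g., whole-blood viscosity).
peaks = rng.normal(size=(9, 21))
wbv = rng.normal(size=9)

X = StandardScaler().fit_transform(peaks)

# PCA to summarize highly collinear peak areas before regression.
pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)

# Ridge regression in PC space, mapped back to per-component weights.
ridge = Ridge(alpha=1.0).fit(scores, wbv)
weights = pca.components_.T @ ridge.coef_

ranking = np.argsort(np.abs(weights))[::-1]
print("components ranked by relevance to the index:", ranking)
```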

  10. Interaction Analysis of a Two-Component System Using Nanodiscs

    PubMed Central

    Hörnschemeyer, Patrick; Liss, Viktoria; Heermann, Ralf; Jung, Kirsten; Hunke, Sabine

    2016-01-01

    Two-component systems are the major means by which bacteria couple adaptation to environmental changes. All utilize a phosphorylation cascade from a histidine kinase to a response regulator, and some also employ an accessory protein. The system-wide signaling fidelity of two-component systems is based on preferential binding between the signaling proteins. However, information on the interaction kinetics between a membrane-embedded histidine kinase and its partner proteins is lacking. Here, we report the first analysis of the interactions between the full-length membrane-bound histidine kinase CpxA, which was reconstituted in nanodiscs, and its cognate response regulator CpxR and accessory protein CpxP. Using surface plasmon resonance spectroscopy in combination with interaction map analysis, the affinity of membrane-embedded CpxA for CpxR was quantified, and found to increase tenfold in the presence of ATP, suggesting that a considerable portion of phosphorylated CpxR might be stably associated with CpxA in vivo. Using microscale thermophoresis, the affinity between CpxA in nanodiscs and CpxP was determined to be substantially lower than that between CpxA and CpxR. Taken together, the quantitative interaction data extend our understanding of the signal transduction mechanism used by two-component systems. PMID:26882435