Science.gov

Sample records for component analysis based

  1. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
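
    A minimal sketch of the kurtosis-driven separation described above, using scikit-learn's FastICA in deflation mode (fun="cube" gives a kurtosis-based contrast). The three synthetic sources are hypothetical stand-ins for CO-like, dust-like, and CMB-like emission; they are not Planck or NANTEN data.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_pix = 5000  # pixels of a flattened mock sky map

        # Hypothetical stand-ins: a strongly kurtotic "CO-like" source,
        # a skewed "dust-like" source, and a near-Gaussian "CMB-like" source.
        co = rng.laplace(size=n_pix) ** 3
        dust = np.abs(rng.laplace(size=n_pix))
        cmb = rng.normal(size=n_pix)
        S = np.c_[co, dust, cmb]

        # Mix into three "frequency channels" (100/143/217 GHz analogues).
        A = np.array([[1.0, 0.8, 1.0],
                      [0.3, 1.0, 1.0],
                      [0.9, 1.2, 1.0]])
        X = S @ A.T

        # Deflation FastICA extracts components one by one; with the cube
        # (kurtosis) contrast the most non-Gaussian source tends to be
        # recovered first, mirroring the CO result reported above.
        ica = FastICA(n_components=3, algorithm="deflation", fun="cube",
                      random_state=0)
        S_est = ica.fit_transform(X)  # estimated sources, one per column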

  2. Likelihood-based population independent component analysis

    PubMed Central

    Eloyan, Ani; Crainiceanu, Ciprian M.; Caffo, Brian S.

    2013-01-01

    Independent component analysis (ICA) is a widely used technique for blind source separation, used heavily in several scientific research areas including acoustics, electrophysiology, and functional neuroimaging. We propose a scalable two-stage iterative true group ICA methodology for analyzing population level functional magnetic resonance imaging (fMRI) data where the number of subjects is very large. The method is based on likelihood estimators of the underlying source densities and the mixing matrix. As opposed to many commonly used group ICA algorithms, the proposed method does not require significant data reduction by a 2-fold singular value decomposition. In addition, the method can be applied to a large group of subjects since the memory requirements are not restrictive. The performance of our approach is compared with a commonly used group ICA algorithm via simulation studies. Furthermore, the proposed method is applied to a large collection of resting state fMRI datasets. The results show that established brain networks are well recovered by the proposed algorithm. PMID:23314416

  3. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

Chen, Yenwei; Han, Xianhua; Nozaki, Shinya

    2004-10-01

We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed method, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
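
    A minimal sketch of the ICA-domain shrinkage step named above, on synthetic data; additive Gaussian noise stands in for the (variance-stabilized) Poisson case, and the threshold value is an illustrative choice.

        import numpy as np
        from sklearn.decomposition import FastICA

        def soft_threshold(x, t):
            # Shrinkage: zero small coefficients, shrink the rest toward zero.
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        # Hypothetical data: rows are flattened image patches.
        rng = np.random.default_rng(1)
        clean = rng.laplace(size=(2000, 16))
        noisy = clean + rng.normal(scale=0.3, size=clean.shape)

        ica = FastICA(n_components=16, random_state=0)
        sources = ica.fit_transform(noisy)        # to the ICA domain
        sources = soft_threshold(sources, 0.05)   # suppress noise coefficients
        denoised = ica.inverse_transform(sources) # back to the image domain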

  4. Background Subtraction Approach based on Independent Component Analysis

    PubMed Central

    Jiménez-Hernández, Hugo

    2010-01-01

In this work, a new approach to background subtraction based on independent component analysis is presented. This approach assumes that background and foreground information are mixed in a given sequence of images. Foreground and background components are then identified, provided that their probability density functions are separable in the mixed space. Afterwards, the component estimation process consists of calculating an unmixing matrix, which is estimated with a fast ICA algorithm based on a Newton-Raphson maximization approach. Next, the motion components are represented by the mid-significant eigenvalues of the unmixing matrix. Finally, the results show the capability of the approach to efficiently detect motion in outdoor and indoor scenarios. The results also show that the approach is robust to changes in scene luminance conditions. PMID:22219704

  5. Phase-shifting interferometry based on principal component analysis.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-04-15

An asynchronous phase-shifting method based on principal component analysis (PCA) is presented. No restrictions on the background, modulation, or phase shifts are necessary. The presented method is very fast and has very low computational requirements, so it can be used with very large images and/or very large image sets. The method is based on obtaining two quadrature signals by the PCA algorithm. We have applied the proposed method to simulated and experimental interferograms, obtaining satisfactory results.
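
    A minimal sketch of the quadrature-signal idea, assuming a stack of simulated fringe patterns: the two leading principal components play the roles of cos(phi) and sin(phi), and the phase follows from their ratio, up to sign and a constant offset.

        import numpy as np

        def pca_phase(frames):
            # One flattened interferogram per row; removing the temporal
            # mean suppresses the background term.
            X = np.array([f.ravel() for f in frames], dtype=float)
            X -= X.mean(axis=0, keepdims=True)
            # The two leading right-singular vectors approximate a pair of
            # quadrature fringe signals.
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            phase = np.arctan2(Vt[1], Vt[0])
            return phase.reshape(frames[0].shape)

        # Simulated interferograms with unknown, unevenly spaced shifts.
        y, x = np.mgrid[0:128, 0:128]
        phi = 0.002 * ((x - 64.0) ** 2 + (y - 64.0) ** 2)
        frames = [1.0 + 0.8 * np.cos(phi + d) for d in (0.3, 1.7, 2.9, 4.1)]
        est = pca_phase(frames)  # matches phi up to sign and offset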

  6. Independent component analysis based two-step phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Shou, Junwei; Lu, Xiaoxu; Yin, Zhenxing; Tian, Jindong; Li, Dong; Zhong, Liyun

    2016-10-01

Based on independent component analysis (ICA), we achieve phase retrieval from two-frame phase-shifting interferograms with unknown phase shifts. First, we remove the background of each interferogram with a Gaussian high-pass filter. Second, the background-removed interferograms are decomposed into a group of mutually independent components by recombining the pixel positions of the interferograms. Third, the phase shifts and the measured phase are retrieved with high accuracy from the ratio of the independent components. Compared with existing two-step phase retrieval algorithms, both simulation and experimental results show that the proposed ICA-based two-step algorithm improves the accuracy of phase retrieval.

  7. Enhancement of textural differences based on morphological component analysis.

    PubMed

    Chi, Jianning; Eramian, Mark

    2015-09-01

This paper proposes a new texture enhancement method which uses an image decomposition that allows different visual characteristics of textures to be represented by separate components, in contrast with previous methods which either enhance texture indirectly or represent all texture information using a single image component. Our method is intended to be used as a preprocessing step prior to the use of texture-based image segmentation algorithms. Our method uses a modification of morphological component analysis (MCA) which allows texture to be separated into multiple morphological components, each representing a different visual characteristic of texture. We select four such texture characteristics and propose new dictionaries to extract these components using MCA. We then propose procedures for modifying each texture component and recombining them to produce a texture-enhanced image. We applied our method as a preprocessing step prior to a number of texture-based segmentation methods and compared the accuracy of the results, finding that our method produced results superior to those of the comparator methods for all segmentation algorithms tested. We also demonstrate by example the main mechanism by which our method produces superior results, namely that it causes the clusters of local texture features of each distinct image texture to mutually diverge within the multidimensional feature space to a far greater degree than the comparator enhancement methods do. PMID:25935032

  8. Filterbank-based independent component analysis for acoustic mixtures

    NASA Astrophysics Data System (ADS)

    Park, Hyung-Min

    2011-06-01

Independent component analysis (ICA) for acoustic mixtures has been a challenging problem due to the very complex reverberation involved in real-world mixing environments. In an effort to overcome the disadvantages of the conventional time-domain and frequency-domain approaches, this paper describes filterbank-based independent component analysis for acoustic mixtures. In this approach, input signals are split into subband signals and decimated. A simplified network performs ICA on the decimated signals, and finally the independent components are synthesized. First, a uniform filterbank is employed for basic and simple derivation and implementation. The uniform-filterbank-based approach achieves better separation performance than the frequency-domain approach and gives faster convergence with less computational complexity than the time-domain approach. Since most natural signals have exponentially or more steeply decreasing energy as frequency increases, these spectral characteristics motivate a Bark-scale filterbank, which divides the low-frequency region finely and the high-frequency region coarsely. The Bark-scale-filterbank-based approach shows faster convergence than the uniform-filterbank-based one because its inputs in the low-frequency subbands are more whitened. It also improves separation performance because it has enough data to train the adaptive parameters accurately in the high-frequency subbands.

  9. Analysis of behavioral requirements for component-based machine controllers

    NASA Astrophysics Data System (ADS)

    Proctor, Frederick M.; Michaloski, John L.; Birla, Sushil; Weinert, George F.

    2001-02-01

Machine controllers built from standardized software components have the greatest potential to reap open architecture benefits--including plug-and-play, reusability and extensibility. A challenge to component-based controllers relates to standardizing behavior in a non-restrictive manner to accommodate component packaging and component integration. Control component packaging requires behavior to be dependable, well-defined, and well-understood among a variety of users to help ensure the reusability of the component, the reliability of the component, and the correctness of the system built using the component. Integration of control components requires that the behavior model is consistent not just within a single component, but across all components in a system so that the components interoperate correctly. At the same time, the component behavioral model must be reasonably flexible to accommodate all behavioral situations and not be restricted to a single programming methodology. Further, not all the behavior in the system may be pre-packaged as part of a component. Thus, another issue is the suitability of the standard behavior model for programming and integration of new control logic. Ideally, we need a vendor-neutral, tool-neutral, controller-neutral behavior model to allow the export/import of any and all types of control logic programs. This paper will analyze the requirements of component-based machine controller behavior, then offer a refinement of a Finite State Machine as the basis of a behavior model to satisfy these requirements. Examples will be presented based on the behavioral model developed in the efforts of the Open, Modular Architecture Controller User's Group Application Programming Interface for standardized, interchangeable machine controller components.
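
    A minimal sketch of the finite-state-machine behavior model argued for above. The state and event names are hypothetical, not the OMAC API's actual state set; the point is that every externally visible behavior is pinned to an explicit transition table that any integrator can inspect.

        from enum import Enum, auto

        class State(Enum):
            UNINITIALIZED = auto()
            READY = auto()
            EXECUTING = auto()
            FAULTED = auto()

        # (current state, event) -> next state; anything absent is an error.
        TRANSITIONS = {
            (State.UNINITIALIZED, "init"):  State.READY,
            (State.READY,         "start"): State.EXECUTING,
            (State.EXECUTING,     "done"):  State.READY,
            (State.EXECUTING,     "fault"): State.FAULTED,
            (State.FAULTED,       "reset"): State.READY,
        }

        class ControllerComponent:
            """A component whose behavior is fully defined by the table,
            so integrators can predict its response to every event."""
            def __init__(self):
                self.state = State.UNINITIALIZED

            def handle(self, event):
                key = (self.state, event)
                if key not in TRANSITIONS:
                    raise ValueError(f"{event!r} not allowed in {self.state}")
                self.state = TRANSITIONS[key]
                return self.state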

  10. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  11. Independent component analysis-based classification of Alzheimer's MRI data

    PubMed Central

    Yang, Wenlu; Lui, Ronald L.M.; Gao, Jia-Hong; Chan, Tony F.; Yau, Shing-Tung; Sperling, Reisa A.; Huang, Xudong

    2013-01-01

There is an unmet medical need to identify neuroimaging biomarkers that are able to accurately diagnose and monitor Alzheimer's disease (AD) at very early stages and assess the response to AD-modifying therapies. To a certain extent, volumetric and functional magnetic resonance imaging (fMRI) studies can detect changes in structure, cerebral blood flow and blood oxygenation that distinguish AD and mild cognitive impairment (MCI) subjects from normal controls. However, it has been challenging to use fully automated MRI analytic methods to identify potential AD neuroimaging biomarkers. We have thus proposed a method based on independent component analysis (ICA) for studying potential AD-related MR image features, coupled with the use of a support vector machine (SVM) for classifying scans into categories of AD, MCI, and normal control (NC) subjects. The MRI data were selected from the Open Access Series of Imaging Studies (OASIS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI) databases. The experimental results showed that our ICA-based method can differentiate AD and MCI subjects from normal controls, although further methodological improvement in the analytic method and inclusion of additional variables may be required for optimal classification. PMID:21321398
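
    A minimal sketch of the ICA-plus-SVM pipeline described above, with random arrays standing in for preprocessed OASIS/ADNI voxel data; the component count and kernel are illustrative choices, not the paper's settings.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Hypothetical stand-ins: one row of voxel intensities per subject,
        # labels 0/1/2 for NC/MCI/AD.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 4096))
        y = rng.integers(0, 3, size=120)

        # ICA compresses voxel patterns into independent components; the
        # SVM then classifies subjects from their component loadings.
        clf = make_pipeline(FastICA(n_components=20, random_state=0),
                            SVC(kernel="linear"))
        print(cross_val_score(clf, X, y, cv=5).mean())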

  12. Principal component analysis based methodology to distinguish protein SERS spectra

    NASA Astrophysics Data System (ADS)

    Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.

    2011-05-01

Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility of the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. It was clearly shown that the combined use of SERS measurements and PCA is effective in categorizing proteins on the basis of secondary structure.

  13. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

For the realization of face recognition systems in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial images, and in some sense it resembles factor analysis, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. From the experimental results, it is envisaged that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
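
    A minimal sketch of the combination, assuming the PyWavelets package: a single-level 2-D Haar DWT keeps the approximation band of each face, and PCA then extracts eigenfeatures from it. Image sizes and component counts are illustrative.

        import numpy as np
        import pywt  # PyWavelets
        from sklearn.decomposition import PCA

        def wavelet_pca_features(faces, n_components=20):
            rows = []
            for img in faces:
                # Keep only the low-frequency approximation band: a 4x
                # smaller image retaining the face's coarse structure.
                cA, _ = pywt.dwt2(img, "haar")
                rows.append(cA.ravel())
            X = np.asarray(rows)
            pca = PCA(n_components=n_components).fit(X)
            return pca, pca.transform(X)

        # Hypothetical usage with 64x64 grayscale faces; a new face is
        # projected with pca.transform and matched by nearest neighbor.
        faces = [np.random.rand(64, 64) for _ in range(50)]
        pca, features = wavelet_pca_features(faces)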

  14. Task-irrelevant alpha component analysis in motor imagery based brain computer interface.

    PubMed

    Lou, Bin; Hong, Bo; Gao, Shangkai

    2008-01-01

In motor imagery based BCI, the alpha rhythm shares the same frequency band with the sensorimotor rhythm (SMR) but does not correlate with the mental task, which contaminates the SMR recording. Independent component analysis (ICA) was applied to decompose the original EEG signal into source components, and a comprehensive method was proposed to discriminate those source components by combining temporal, frequency, spatial, and class label information. Task-irrelevant alpha components were sorted out and their projections were reduced by proper bipolar electrode placement, improving the BCI performance.

  15. Remote sensing image fusion method based on multiscale morphological component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Ni, Mengying; Zhang, Yanjie; Tong, Xiangrong; Zheng, Qiang; Liu, Jinglei

    2016-04-01

    A remote sensing image (RSI) fusion method based on multiscale morphological component analysis (m-MCA) is presented. Our contribution describes a new multiscale sparse image decomposition algorithm called m-MCA, which we apply to RSI fusion. Building on MCA, m-MCA combines curvelet transform bases and local discrete cosine transform bases to build a multiscale decomposition dictionary, and controls the entries of the dictionary to decompose the image into texture components and cartoon components with different scales. The effective scale texture component of high-resolution RSI and the cartoon component of multispectral RSI are selected to reconstruct the fusion image. Compared with state-of-the-art fusion methods, the proposed fusion method obtains higher spatial resolution and lower spectral distortion with reduced computation load in numerical experiments.

  16. Gabor feature-based apple quality inspection using kernel principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

Automated inspection of apple quality involves computer recognition of good apples and blemished apples based on geometric or statistical features derived from apple images. This paper introduces a Gabor feature-based kernel principal component analysis (PCA) method by combining Gabor wavelet rep...

  17. Reduction of a collisional-radiative mechanism for argon plasma based on principal component analysis

    SciTech Connect

    Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.

    2015-06-15

This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set on a reduced basis made up of its principal components. Thus, the flow governing equations are only solved for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the use of the MG-PCA technique enables a substantial reduction of the computational time.

  18. Impact factor analysis of mixture spectra unmixing based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Shengbo; Guo, Xulin; Zhou, Chao

    2016-01-01

Based on the spectral independence of different materials, independent component analysis (ICA), a blind source separation technique, can be applied to separate mixed hyperspectral signals. For the purpose of detecting objects on the sea and improving the precision of target recognition, an original ICA method is applied, and the influence exerted by the spectral features of different materials and mixture materials on the spectral unmixing results is analyzed. Due to the complexity of targets on the sea, several measured spectra of different materials were mixed with water spectra to simulate mixed spectra for decomposition. Synthetic mixed spectra were generated by linear combinations of the spectra of different materials and water, and the separated results were compared with the measured spectra of each endmember using the coefficient of determination. We conclude that factors that change the original spectral characteristics away from a Gaussian distribution have a significant influence on the separated results, and that selecting a proper initial matrix and processing spectral data with lower noise can help the ICA method obtain more accurate separated results from hyperspectral data.

  19. Batch process monitoring based on multiple-phase online sorting principal component analysis.

    PubMed

    Lv, Zhaomin; Yan, Xuefeng; Jiang, Qingchao

    2016-09-01

Existing phase-based batch or fed-batch process monitoring strategies generally have two problems: (1) the phase number is difficult to determine, and (2) the data have uneven lengths. In this study, a multiple-phase online sorting principal component analysis modeling strategy (MPOSPCA) is proposed to monitor multiple-phase batch processes online. Based on all batches of off-line normal data, a new multiple-phase partition algorithm is proposed, where k-means and a defined average Euclidean radius are employed to determine the multiple-phase data set and phase number. Principal component analysis is then applied to build the model in each phase, and all the components are retained. In online monitoring, the Euclidean distance is used to select the monitoring model. All the components undergo online sorting through a parameter defined by Bayesian inference (BI). The first several components are retained to calculate the T² statistic. Finally, the respective probability indices of [Formula: see text] are obtained using BI as the moving average strategy. The feasibility and effectiveness of MPOSPCA are demonstrated through a simple numerical example and the fed-batch penicillin fermentation process.
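
    A minimal sketch of the per-phase PCA monitoring statistic, assuming synthetic normal data; in place of the Bayesian online sorting step it simply keeps the k leading components when forming the Hotelling T² statistic.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X_normal = rng.normal(size=(500, 8))   # normal operating data
        pca = PCA().fit(X_normal)              # keep all components, as above

        def t2(x, k=4):
            # Hotelling T^2 from the k leading component scores.
            s = pca.transform(x.reshape(1, -1)).ravel()[:k]
            return float(np.sum(s ** 2 / pca.explained_variance_[:k]))

        # Empirical control limit from the normal batches (99th percentile).
        limit = np.percentile([t2(x) for x in X_normal], 99)
        x_new = rng.normal(size=8)
        print(t2(x_new) > limit)  # True would flag a potential fault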

  20. Improved gene prediction by principal component analysis based autoregressive Yule-Walker method.

    PubMed

    Roy, Manidipa; Barman, Soma

    2016-01-10

Spectral analysis using Fourier techniques is popular for gene prediction because of its simplicity. Model-based autoregressive (AR) spectral estimation gives better resolution even for small DNA segments, but selection of an appropriate model order is a critical issue. In this article, a technique is proposed in which the Yule-Walker autoregressive (YW-AR) process is combined with principal component analysis (PCA) for dimensionality reduction. The spectral peaks of the DNA signal are used to detect protein-coding regions based on the 1/3 frequency component. Optimal model order selection is no longer critical, as noise is removed by PCA prior to power spectral density (PSD) estimation. The eigenvalue ratio is used to find the threshold between the signal and noise subspaces for data reduction. The superiority of the proposed method over the fast Fourier transform (FFT) method and the autoregressive method combined with wavelet packet transform (WPT) is established with the help of receiver operating characteristics (ROC) and the discrimination measure (DM), respectively.
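
    A minimal sketch of the Yule-Walker PSD step, assuming statsmodels for the AR fit; the test signal carries an artificial period-3 component so the peak lands near the 1/3 frequency used for coding-region detection. The PCA denoising stage is omitted here.

        import numpy as np
        from statsmodels.regression.linear_model import yule_walker

        def ar_psd(x, order, n_freq=512):
            # AR PSD: P(f) = sigma^2 / |1 - sum_k rho_k e^{-i 2 pi f k}|^2
            rho, sigma = yule_walker(x, order=order, method="mle")
            f = np.linspace(0.0, 0.5, n_freq)
            k = np.arange(1, order + 1)
            denom = 1.0 - np.exp(-2j * np.pi * np.outer(f, k)) @ rho
            return f, sigma ** 2 / np.abs(denom) ** 2

        # Stand-in for a numerically mapped coding DNA segment: a strong
        # period-3 component plus noise.
        x = np.sin(2 * np.pi * np.arange(300) / 3) + np.random.randn(300)
        f, psd = ar_psd(x, order=10)
        print(f[np.argmax(psd)])  # close to 1/3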

  1. CRLS-PCA based independent component analysis for fMRI study.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Childress, Anna R; Rao, Hengyi; Detre, John A

    2005-01-01

Data reduction through conventional principal component analysis is impractical for temporal independent component analysis (tICA) on fMRI data, since the data covariance matrix is too large to be manipulated. It is also computationally intensive for spatial ICA (sICA) on fMRI scans with long time series. To solve this problem, a cascade recursive least squares network based PCA (CRLS-PCA) is used to reduce the fMRI data in this paper. Without the need to compute the data covariance matrix, CRLS-PCA can extract an arbitrary number of PCs directly from the original data, which also saves time in data reduction. Experimental results are given to evaluate the performance of CRLS-PCA based tICA and sICA in fMRI studies.

2. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis. It consists of three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail according to their trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis has also been conducted to verify our analysis. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
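
    A minimal sketch of the EEMD-ICA chain, assuming the PyEMD (EMD-signal) package; the variance threshold used to re-group minor IMFs into a single virtual IMF is an arbitrary illustrative choice, and the price series is a random-walk stand-in.

        import numpy as np
        from PyEMD import EEMD  # pip install EMD-signal
        from sklearn.decomposition import FastICA

        # Hypothetical daily gold price series (random walk stand-in).
        rng = np.random.default_rng(0)
        price = 1200.0 + np.cumsum(rng.normal(size=1000))

        # Step 1: EEMD decomposition into IMFs (one per row).
        imfs = EEMD().eemd(price)

        # Step 2: re-group low-variance IMFs into one virtual IMF (VIMF).
        var = imfs.var(axis=1)
        keep = var >= 0.05 * var.sum()
        vimfs = np.vstack([imfs[keep], imfs[~keep].sum(axis=0)])

        # Step 3: ICA decomposes the VIMFs into independent components
        # whose mixing coefficients can then be interpreted economically.
        ics = FastICA(random_state=0).fit_transform(vimfs.T)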

  3. [In vitro transdermal delivery of the active fraction of xiangfusiwu decoction based on principal component analysis].

    PubMed

    Li, Zhen-Hao; Liu, Pei; Qian, Da-Wei; Li, Wei; Shang, Er-Xin; Duan, Jin-Ao

    2013-06-01

The objective of the present study was to establish a method based on principal component analysis (PCA) for the study of transdermal delivery of multiple components in Chinese medicine, and to choose the best penetration enhancers for the active fraction of Xiangfusiwu decoction (BW) with this method. Transdermal delivery experiments on six active components, including ferulic acid, paeoniflorin, albiflorin, protopine, tetrahydropalmatine and tetrahydrocolumbamine, were carried out using improved Franz diffusion cells with isolated rat abdominal skin. The concentrations of these components were determined by LC-MS/MS; the total factor scores of the concentrations at different times were then calculated using PCA and employed in place of the concentrations to compute the cumulative amounts and steady fluxes, the latter of which were considered the indexes for optimizing penetration enhancers. The results showed that, compared with the control group, the steady fluxes of the other groups increased significantly, and 4% azone with 1% propylene glycol showed the best effect. The six components could penetrate the skin well under the action of penetration enhancers. The method established in this study has proved suitable for the study of transdermal delivery of multiple components; it provides a scientific basis for preparation research on Xiangfusiwu decoction and, moreover, could serve as a reference for Chinese medicine research. PMID:23984531

  4. Image-based pupil plane characterization via principal component analysis for EUVL tools

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-03-01

We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the SEMATECH High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.

  5. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  6. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

This work presents a non-parametric method based on a principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline based on a set of sampled basis vectors obtained from PCA applied over a previously composed continuous spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratio (SBR), signal-to-noise ratio (SNR), and baseline slopes. In addition to demonstrating the utility of the proposed methods and comparing them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. PMID:26917856
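
    A minimal sketch of the non-parametric branch, with synthetic polynomial baselines standing in for the composed learning matrix: PCA learns a sampled baseline basis, and a spectrum is corrected by subtracting its projection onto that basis. In practice the projection should be made robust to peaks (e.g., fitted on peak-free regions).

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        wl = np.linspace(0.0, 1.0, 400)  # normalized wavelength axis

        # Learning matrix: smooth hypothetical baselines (quadratics here).
        abc = rng.uniform(0.0, 1.0, size=(200, 3))
        baselines = np.array([a + b * wl + c * wl ** 2 for a, b, c in abc])

        # The leading PCs form a sampled basis of the baseline subspace.
        pca = PCA(n_components=3).fit(baselines)

        def remove_baseline(spectrum):
            # Project onto the baseline basis and subtract the reconstruction.
            est = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
            return spectrum - est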

  7. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter. PMID:21368935

  8. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
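
    A minimal sketch of one IPCA-based detector, assuming scikit-learn's IncrementalPCA and synthetic sensor batches: the model is updated batch by batch, and observations with large reconstruction error under the current model are flagged. The 3-sigma rule is an illustrative threshold, not one of the paper's methods.

        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        rng = np.random.default_rng(0)
        ipca = IncrementalPCA(n_components=3)

        def reconstruction_error(batch):
            recon = ipca.inverse_transform(ipca.transform(batch))
            return np.linalg.norm(batch - recon, axis=1)

        # Hypothetical stream: batches of readings from 10 sensors.
        for t in range(20):
            batch = rng.normal(size=(50, 10))
            if t > 0:  # score only once an initial model exists
                err = reconstruction_error(batch)
                outliers = err > err.mean() + 3.0 * err.std()
            ipca.partial_fit(batch)  # then fold the batch into the model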

  9. Aberration measurement based on principal component analysis of aerial images of optimized marks

    NASA Astrophysics Data System (ADS)

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo

    2014-10-01

    We propose an aberration measurement technique based on principal component analysis of aerial images of optimized marks (AMAI-OM). Zernike aberrations are retrieved using a linear relationship between the aerial image and Zernike coefficients. The linear relationship is composed of the principal components (PCs) and regression matrix. A centering process is introduced to compensate position offsets of the measured aerial image. A new test mark is designed in order to improve the centering accuracy and theoretical accuracy of aberration measurement together. The new test marks are composed of three spaces with different widths, and their parameters are optimized by using an accuracy evaluation function. The offsets of the measured aerial image are compensated in the centering process and the adjusted PC coefficients are obtained. Then the Zernike coefficients are calculated according to these PC coefficients using a least square method. The simulations using the lithography simulators PROLITH and Dr.LiTHO validate the accuracy of our method. Compared with the previous aberration measurement technique based on principal component analysis of aerial image (AMAI-PCA), the measurement accuracy of Zernike aberrations under the real measurement condition of the aerial image is improved by about 50%.

  10. Blind spectral unmixing based on sparse component analysis for hyperspectral remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Wang, Xinyu; Zhao, Lin; Feng, Ruyi; Zhang, Liangpei; Xu, Yanyan

    2016-09-01

Recently, many blind source separation (BSS)-based techniques have been applied to hyperspectral unmixing. In this paper, a new blind spectral unmixing method based on sparse component analysis (BSUSCA) is proposed to solve the problem of highly mixed data. The BSUSCA algorithm consists of an alternative scheme based on two-block alternating optimization, by which we can simultaneously obtain the endmember signatures and their corresponding fractional abundances. According to the spatial distribution of the endmembers, the sparse properties of the fractional abundances are considered in the proposed algorithm. A sparse component analysis (SCA)-based mixing matrix estimation method is applied to update the endmember signatures, and the abundance estimation problem is solved by the alternating direction method of multipliers (ADMM). SCA is utilized for the unmixing due to its various advantages, including the unique solution and robust modeling assumption. The robustness of the proposed algorithm is verified through a simulated experimental study. The experimental results using both simulated data and real hyperspectral remote sensing images confirm the high efficiency and precision of the proposed algorithm.

  11. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis

    PubMed Central

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-01-01

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance. PMID:27681452

  12. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    PubMed

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations.

  13. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the new approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  14. Small target detection based on three-dimensional principal component analysis in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2014-10-01

Research on target detection in hyperspectral imagery (HSI) has drawn much attention recently in many areas. Due to the limitation of the HSI sensor's spatial resolution, a target of interest normally occupies only a few pixels, and sometimes is present only at the subpixel level. This increases the difficulty of target detection. Moreover, in some cases, such as rescue and surveillance tasks, small targets are the most significant information. Therefore, effectively detecting small targets of interest is difficult but important. Modeling an HSI data cube as a three-dimensional tensor preserves as much as possible of the original spatial-spectral constraint structure, which is conducive to utilizing the whole information for small target detection. This paper proposes a novel and effective algorithm for small target detection in HSI based on three-dimensional principal component analysis (3D-PCA). According to the 3D-PCA, the significant components usually contain most of the information in the imagery; in contrast, the details of small targets remain in the insignificant components. So, after the 3D-PCA is applied to the HSI, the significant components, which represent the background of the HSI, are removed and the insignificant components are used to detect small targets. The algorithm is notable in that the tensor-based method processes the HSI directly, making full use of spatial and spectral information by employing multilinear algebra. Experiments with a real HSI show that the detection probability of small targets of interest improved greatly compared with the classical RX detector.

  15. A neural-network appearance-based 3-D object recognition using independent component analysis.

    PubMed

    Sahambi, H S; Khorasani, K

    2003-01-01

This paper presents results on appearance-based three-dimensional (3-D) object recognition (3DOR) accomplished by utilizing a neural-network architecture developed based on independent component analysis (ICA). ICA has already been applied to face recognition in the literature with encouraging results. In this paper, we explore the possibility of utilizing the redundant information in the visual data to enhance view-based object recognition. The underlying premise here is that since ICA uses higher-order statistics, it should in principle outperform principal component analysis (PCA), which does not utilize statistics of order higher than two, in the recognition task. Two databases of images captured by a CCD camera are used. It is demonstrated that ICA did perform better than PCA on one of the databases, but interestingly its performance was no better than PCA in the case of the second database. This suggests that the use of ICA may not necessarily always give better results than PCA, and that the application of ICA is highly data dependent. Various factors affecting the differences in recognition performance between the two methods are also discussed. PMID:18237997

  16. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiments, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from the certain cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, parallel machine, and number of processors.

  17. Functional activity maps based on significance measures and Independent Component Analysis.

    PubMed

    Martínez-Murcia, F J; Górriz, J M; Ramírez, J; Puntonet, C G; Illán, I A

    2013-07-01

The use of functional imaging has proven very helpful in the diagnosis of neurodegenerative diseases, such as Alzheimer's Disease (AD). In many cases, the analysis of these images is performed by manual reorientation and visual interpretation. Therefore, new statistical techniques to perform a more quantitative analysis are needed. In this work, a new statistical approach to the analysis of functional images, based on significance measures and Independent Component Analysis (ICA), is presented. After image preprocessing, voxels that allow better separation of the two classes are extracted, using significance measures such as the Mann-Whitney-Wilcoxon U-test (MWW) and Relative Entropy (RE). After this feature selection step, the voxel vector is modelled by means of ICA, extracting a few independent components which are used as input to the classifier. Naive Bayes and Support Vector Machine (SVM) classifiers are used in this work. The proposed system has been applied to two different databases: a 96-subject Single Photon Emission Computed Tomography (SPECT) database from the "Virgen de las Nieves" Hospital in Granada, Spain, and a 196-subject Positron Emission Tomography (PET) database from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Accuracies of up to 96.9% and 91.3% for the SPECT and PET databases, respectively, are achieved by the proposed system, which has yielded many benefits over methods proposed in recent works.
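
    A minimal sketch of the pipeline, with random arrays standing in for preprocessed SPECT/PET voxels: MWW U-test p-values select discriminative voxels, ICA compresses them, and an SVM classifies the component loadings. Voxel and component counts are illustrative.

        import numpy as np
        from scipy.stats import mannwhitneyu
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(96, 2000))     # subjects x voxels (stand-in)
        y = rng.integers(0, 2, size=96)     # 0 = control, 1 = AD

        # Significance-based selection: keep the voxels whose two-class
        # Mann-Whitney-Wilcoxon p-values are smallest.
        p = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
        sel = np.argsort(p)[:200]

        # ICA features from the selected voxels, then an SVM classifier.
        feats = FastICA(n_components=10, random_state=0).fit_transform(X[:, sel])
        clf = SVC(kernel="linear").fit(feats, y)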

  18. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  19. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  20. Cardiac autonomic changes in middle-aged women: identification based on principal component analysis.

    PubMed

    Trevizani, Gabriela A; Nasario-Junior, Olivassé; Benchimol-Barbosa, Paulo R; Silva, Lilian P; Nadal, Jurandir

    2016-07-01

The purpose of this study was to investigate the application of the principal component analysis (PCA) technique to the power spectral density function (PSD) of consecutive normal RR intervals (iRR), aiming to assess its ability to discriminate healthy women according to age group: a young group (20-25 years old) and a middle-aged group (40-60 years old). Thirty healthy and non-smoking female volunteers were investigated (13 young [mean ± SD (median): 22.8 ± 0.9 years (23.0)] and 17 middle-aged [51.7 ± 5.3 years (50.0)]). The iRR sequence was collected for ten minutes, breathing spontaneously, in the supine position and in the morning, using a heart rate monitor. After selecting an iRR segment (5 min) with the smallest variance, an autoregressive model was used to estimate the PSD. Five principal component coefficients, extracted from the PSD signals, were retained for analysis according to the Mahalanobis distance classifier. A threshold established by logistic regression allowed the separation of the groups with 100% specificity, 83.2% sensitivity and 93.3% total accuracy. The PCA appropriately classified the two groups of women in relation to age (young and middle-aged) based on PSD analysis of consecutive normal RR intervals.

  1. Discriminant Incoherent Component Analysis.

    PubMed

    Georgakis, Christos; Panagakis, Yannis; Pantic, Maja

    2016-05-01

Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes, such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and AU activations. In this paper, the discriminant incoherent component analysis (DICA) is proposed in order to extract low-complexity components, corresponding to facial attributes, which are mutually incoherent among different classes (e.g., identity, expression, and AU activation) from training data, even in the presence of gross sparse errors. To this end, a suitable optimization problem, involving the minimization of the nuclear- and l1-norms, is solved. Having found an ensemble of class-specific incoherent components by the DICA, an unseen (test) image is expressed as a group-sparse linear combination of these components, where the non-zero coefficients reveal the class(es) of the respective facial attribute(s) that it belongs to. The performance of the DICA is experimentally assessed on both synthetic and real-world data. Emphasis is placed on face analysis tasks, namely, joint face and expression recognition, face recognition under varying percentages of training data corruption, subject-independent expression recognition, and AU detection, by conducting experiments on four data sets. The proposed method outperforms all compared methods across all tasks and experimental settings. PMID:27008268

2. Independent component analysis of instantaneous power-based fMRI.

    PubMed

    Zhong, Yuan; Zheng, Gang; Liu, Yijun; Lu, Guangming

    2014-01-01

In functional magnetic resonance imaging (fMRI) studies using the spatial independent component analysis (sICA) method, a model of "latent variables" is often employed, which is based on the assumption that fMRI data are linear mixtures of statistically independent signals. However, actual fMRI signals are nonlinear and do not automatically meet the requirement of sICA. To provide a better solution to this problem, we proposed a novel approach termed instantaneous power based fMRI (ip-fMRI) for regularization of fMRI data. Given that the instantaneous power of fMRI signals is a scalar value, it should be a linear mixture that naturally satisfies the "latent variables" model. Based on our simulated data, the accuracy curves and resulting receiver-operating characteristic curves indicate that the proposed approach is superior to the traditional fMRI in terms of accuracy and specificity when using sICA. Experimental results from human subjects have shown that the spatial components of a hand-movement task-induced activation reveal a brain network more specific to motor function with ip-fMRI than with the traditional fMRI. We conclude that ICA decomposition of ip-fMRI may be used to localize energy signal changes in the brain and may have potential for detecting brain activity.
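
    A minimal sketch of the idea, taking the instantaneous power as the square of the centered BOLD signal (one literal reading of the transform; the paper's exact definition may differ) before applying spatial ICA:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        bold = rng.normal(size=(200, 5000))      # time x voxels stand-in

        # Instantaneous power: the squared, centered signal. Squaring
        # yields a nonnegative scalar series, matching the linear-mixture
        # ("latent variables") assumption discussed above.
        power = (bold - bold.mean(axis=0)) ** 2

        # Spatial ICA: voxels are samples, time points are mixtures.
        sica = FastICA(n_components=20, random_state=0)
        maps = sica.fit_transform(power.T).T     # component spatial maps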

  4. A Conditional Entropy-Based Independent Component Analysis for Applications in Human Detection and Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Siana, Linda; Shou, Yu-Wen; Shen, Tzu-Kuei

    2010-12-01

    We present in this paper a modified independent component analysis (mICA) based on conditional entropy to discriminate unsorted independent components. We make use of the conditional entropy to select an appropriate subset of the ICA features with superior capability in classification, and apply a support vector machine (SVM) to recognizing human and nonhuman patterns. Moreover, we use models of background images based on a Gaussian mixture model (GMM) to handle images with complicated backgrounds. Also, color-based shadow elimination and elliptical head models are combined to improve the performance of moving-object extraction and recognition in our system. Our proposed tracking mechanism monitors the movement of humans, animals, or vehicles within a surveillance area and keeps tracking the moving pedestrians by using color information in the HSV domain. The tracking mechanism uses the Kalman filter to predict the locations of moving objects when their color information is lacking. Finally, our experimental results show that our proposed approach can perform well for real-time applications in both indoor and outdoor environments.

  5. Independent Component Analysis of Instantaneous Power-Based fMRI

    PubMed Central

    Liu, Yijun; Lu, Guangming

    2014-01-01

    In functional magnetic resonance imaging (fMRI) studies using the spatial independent component analysis (sICA) method, a model of “latent variables” is often employed, which is based on the assumption that fMRI data are linear mixtures of statistically independent signals. However, actual fMRI signals are nonlinear and do not automatically meet the requirements of sICA. To provide a better solution to this problem, we proposed a novel approach termed instantaneous power based fMRI (ip-fMRI) for regularization of fMRI data. Given that the instantaneous power of fMRI signals is a scalar value, it should be a linear mixture that naturally satisfies the “latent variables” model. Based on our simulated data, the accuracy curves and the resulting receiver-operating characteristic curves indicate that the proposed approach is superior to traditional fMRI in terms of accuracy and specificity when using sICA. Experimental results from human subjects have shown that the spatial components of a hand-movement task-induced activation reveal a brain network more specific to motor function with ip-fMRI than with traditional fMRI. We conclude that ICA decomposition of ip-fMRI may be used to localize energy signal changes in the brain and has potential for application to the detection of brain activity. PMID:24738008

  6. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose: To investigate a fast, objective and standardized method for analyzing breast DCE-MRI by applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods: 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results: Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue; two IS-eigenvectors and the first two ES-eigenvectors captured contrast-enhanced changes, whereas the remaining eigenvectors captured predominantly noise. Rotation of the two contrast-related eigenvectors led to a high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector to detect lesions (AUC>0.97) and of the 2nd rotated eigenvector to differentiate malignancy from benignancy (AUC=0.87). Conclusion: PCA adjusted with a model-based method provided a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419

  7. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  8. A remote sensing image fusion method based on feedback sparse component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Yu, Xianchuan; Pei, Wenjing; Hu, Dan; Zhang, Libao

    2015-12-01

    We propose a new remote sensing image (RSI) fusion technique based on sparse blind source separation theory. Our method employs feedback sparse component analysis (FSCA), which can extract the original image in a step-by-step manner and is robust against noise. For RSIs from the China-Brazil Earth Resources Satellite, FSCA can separate useful surface feature information from redundant information and noise. The FSCA algorithm is therefore used to develop two RSI fusion schemes: one focuses on fusing high-resolution and multi-spectral images, while the other fuses synthetic aperture radar bands. The experimental results show that the proposed method can preserve spectral and spatial details of the source images. For certain evaluation indexes, our method performs better than classical fusion methods.

  9. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data.

    PubMed

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  10. An Image Reconstruction Algorithm for Electrical Capacitance Tomography Based on Robust Principle Component Analysis

    PubMed Central

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-01-01

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from the capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes the multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, where the images are reconstructed by a batching pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm. PMID:23385418

  11. Intelligent Surveillance System Design Based on Independent Component Analysis and Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Xie, Long; Ogawa, Masatoshi; Kigawa, Youichi; Ogai, Harutoshi

    This paper explores the development of a real-time intelligent surveillance system using pattern-recognition technology based on independent component analysis (ICA) and a novel matching method, as a reaction to perceptions of insecurity in sensitive spaces. Motion images of people are captured by on-board micro digital cameras and transferred over a wireless network to an FPGA board. Feature points of the captured image and of the images in the database are extracted using the ICA algorithm on an embedded PowerPC. The most similar images are picked from the image database, which is classified into different clusters, and the potential insecurity level of invaders is detected. Furthermore, the respective locations are connected by the wireless network. The hardware/software co-designed system is implemented on a Xilinx FPGA with high efficiency, low power consumption and easy integration with other devices.

  12. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  13. Sex-based differences in lifting technique under increasing load conditions: A principal component analysis.

    PubMed

    Sheppard, P S; Stevenson, J M; Graham, R B

    2016-05-01

    The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. PMID:26851478

  14. Application of independent component analysis in target trajectory prediction based on moving platform

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Mao, Yao; Gan, Xun; Tian, Jing

    2015-10-01

    In Electro-Optical tracking systems, compound control is used to maintain high-precision tracking of fast targets by predicting the trajectory of the target. A traditional ground-based Electro-Optical tracking system uses encoder data and the target missing quantity read from image sensors to obtain the target trajectory by means of prediction filtering techniques. Compared with traditional ground-based systems, the relative angle between the tracking system and the ground cannot be read directly from encoder data in an Electro-Optical tracking system based on a moving platform. Thus the combination of inertial sensor data and the target missing quantity is required to compose the target trajectory. However, the output of the inertial sensors contains not only the information of the target's motion, but also the residual error of vibration suppression. This vibration suppression residual error affects the trajectory prediction accuracy, thereby reducing the compensation precision and the stability of the compound control system. The independent component analysis (ICA) method, which can effectively separate source signals from measurement signals, is introduced to the field of target trajectory prediction in this paper. An experimental system based on the method is built by mounting a small dual-axis disturbance platform, which is taken as the stable platform, on a large dual-axis disturbance platform, which is used to simulate the motion of the moving platform. The result shows that the vibration residual is separated and subtracted from the combined motion data. The target motion is therefore obtained and the feasibility of the method is proved.

  15. Metabolic Module Mining Based on Independent Component Analysis in Arabidopsis thaliana

    PubMed Central

    Han, Xiao; Chen, Cong; Hyun, Tae Kyung; Kumar, Ritesh; Kim, Jae-Yean

    2012-01-01

    Independent Component Analysis (ICA) has been introduced as one of the useful tools for gene-functional discovery in animals. However, this approach has been poorly utilized in the plant sciences. In the present study, we have exploited ICA combined with pathway enrichment analysis to address the statistical challenges associated with genome-wide analysis in plant systems. To generate an Arabidopsis metabolic platform, we collected 4,373 Affymetrix ATH1 microarray datasets. Of the 3,232 metabolic genes and transcription factors, 99.47% were identified in at least one component, indicating that the components cover most of the metabolic pathways. During the metabolic pathway enrichment analysis, we found components that indicate an independent regulation between the isoprenoid biosynthesis pathways. We also utilized this analysis tool to investigate transcription factors involved in secondary cell wall biogenesis. This approach identified remarkably more transcription factors than previously reported analysis tools. A website providing user-friendly searching and downloading of the entire dataset analyzed by ICA is available at http://kimjy.gnu.ac.kr/ICA.files/slide0002.htm. ICA combined with pathway enrichment analysis might provide a powerful approach for the extraction of the components responsible for a biological process of interest in plant systems. PMID:22960738

  16. Structure borne noise analysis using Helmholtz equation least squares based forced vibro acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology that is focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far-field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation and hence cannot guarantee the most cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve a cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical co-ordinates using spherical wave expansion functions, utilize acoustic pressures measured in the nearfield of a vibrating object to reconstruct the vibro-acoustic responses on the source surface and the acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of the normal surface velocity of the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain an orthogonal basis known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a similar manner to how modal decomposition identifies the critical natural modes in a structural vibration. Finally

  17. Contact- and distance-based principal component analysis of protein dynamics

    SciTech Connect

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
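
    A minimal version of such a distance-based PCA is easy to express; the sketch below (an illustration with hypothetical array shapes, not the authors' exact protocol) builds the vector of all pairwise Cα distances per frame and projects the frames onto the leading components:

      import numpy as np
      from sklearn.decomposition import PCA

      def distance_pca(coords, n_components=2):
          """coords: (n_frames, n_atoms, 3) C-alpha coordinates. Note the
          quadratic growth in the number of distances; in practice a
          pre-selected subset suffices, as the study above reports."""
          i, j = np.triu_indices(coords.shape[1], k=1)     # unique atom pairs
          dists = np.linalg.norm(coords[:, i, :] - coords[:, j, :], axis=-1)
          pca = PCA(n_components=n_components)
          return pca.fit_transform(dists), pca             # frame projections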

  18. Deformation density components analysis of fullerene-based anti-HIV drugs.

    PubMed

    Fakhraee, Sara; Souri, Maryam

    2014-11-01

    Deformation density analysis is performed on fullerene-based anti-HIV agents to investigate the influence of charge redistribution on their capability of binding to HIV enzymes. Two types of HIV inhibitors, malonic acid- and amino acid-type C60 derivatives, are considered for study. The total deformation density and its components, including orbital relaxation and kinetic energy pressure, are obtained for the C60 derivatives. The deformation natural orbitals for each component of the deformation density are assessed and their amounts of charge displacement are quantified to evaluate the binding affinity of the HIV inhibitors. The results show that orbital relaxation plays a more prominent role in the deformation of the electron density of the studied compounds. Among the considered drugs, the amino acid-type derivatives, N-(carboxymethyl)-2,5-dicarboxylic fulleropyrrolidines, show the most charge displacement. Moreover, the investigation into the deformation density of amino acid-type functional groups on C60 reveals that connection of the functional groups to the 5,6-ring junction results in more displaced charge than connection to the 6,6-ring junction.

  19. Robust principal component analysis-based four-dimensional computed tomography

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of a low-rank matrix and a sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction and/or change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames.
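
    The low-rank-plus-sparse split at the core of this model can be sketched with a basic principal component pursuit solver (an inexact augmented Lagrangian iteration, not the split Bregman algorithm the paper develops); rows index space, columns index phases, and the default parameters are common heuristics rather than the paper's choices:

      import numpy as np

      def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
          """Split M (space x time) into low-rank L ('background') plus
          sparse S ('motion') by principal component pursuit."""
          m, n = M.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
          S = np.zeros_like(M)
          Y = np.zeros_like(M)
          norm_M = np.linalg.norm(M) + 1e-12
          for _ in range(max_iter):
              # singular value thresholding updates the low-rank part
              U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
              L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
              # entrywise soft thresholding updates the sparse part
              R = M - L + Y / mu
              S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
              Y += mu * (M - L - S)
              if np.linalg.norm(M - L - S) / norm_M < tol:
                  break
          return L, S

      M = np.outer(np.ones(50), np.linspace(0.0, 1.0, 40))  # rank-1 background
      M[10, 5] += 5.0                                       # one sparse 'motion' spike
      L, S = rpca(M)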

  20. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    PubMed Central

    2010-01-01

    Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve hand dexterity implies selectively modulating many different EMG signals in order to make each joint move independently, and this could require significant cognitive effort from the user. Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). Such a principal components set has been derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Results: Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions: This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036

  1. Robust principal component analysis-based four-dimensional computed tomography.

    PubMed

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of a low-rank matrix and a sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction and/or change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames. PMID:21540490

  2. Reduced order model based on principal component analysis for process simulation and optimization

    SciTech Connect

    Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.

    2009-01-01

    It is well known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.

  3. Raman Based Process Monitor for Continuous Real-Time Analysis Of High Level Radioactive Waste Components

    SciTech Connect

    Bryan, S.; Levitskaia, T.; Schlahta, St.

    2008-07-01

    A new monitoring system was developed at Pacific Northwest National Laboratory (PNNL) to quickly generate real-time data/analysis to facilitate a timely response to the dynamic characteristics of a radioactive high level waste stream. The developed process monitor features Raman and Coriolis/conductivity instrumentation configured for remote monitoring, MatLab-based chemometric data processing, and comprehensive software for data acquisition/storage/archiving/display. The monitoring system is capable of simultaneously and continuously quantifying the levels of all the chemically significant anions within the waste stream, including nitrate, nitrite, phosphate, carbonate, chromate, hydroxide, sulfate, and aluminate. The total sodium ion concentration was also determined independently by modeling inputs from on-line conductivity and density meters. In addition to the chemical information, this monitoring system provides immediate real-time data on the flow parameters, such as flow rate and temperature, and the cumulative mass/volume of the retrieved waste stream. The components and analytical tools of the new process monitor can be tailored for a variety of complex mixtures in chemically harsh environments, such as pulp and paper processing liquids, electroplating solutions, and radioactive tank wastes. The developed monitoring system was tested for acceptability before it was deployed for use in Hanford Tank S-109 retrieval activities. The acceptance tests included performance inspection of hardware, software, and chemometric data analysis to determine the expected measurement accuracy for the different chemical species encountered during S-109 retrieval.

  4. Raman Based Process Monitor For Continuous Real-Time Analysis Of High Level Radioactive Waste Components

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.; Schlahta, Stephan N.

    2008-05-27

    A new monitoring system was developed at Pacific Northwest National Laboratory (PNNL) to quickly generate real-time data/analysis to facilitate a timely response to the dynamic characteristics of a radioactive high level waste stream. The developed process monitor features Raman and Coriolis/conductivity instrumentation configured for remote monitoring, MatLab-based chemometric data processing, and comprehensive software for data acquisition/storage/archiving/display. The monitoring system is capable of simultaneously and continuously quantifying the levels of all the chemically significant anions within the waste stream, including nitrate, nitrite, phosphate, carbonate, chromate, hydroxide, sulfate, and aluminate. The total sodium ion concentration was also determined independently by modeling inputs from on-line conductivity and density meters. In addition to the chemical information, this monitoring system provides immediate real-time data on the flow parameters, such as flow rate and temperature, and the cumulative mass/volume of the retrieved waste stream. The components and analytical tools of the new process monitor can be tailored for a variety of complex mixtures in chemically harsh environments, such as pulp and paper processing liquids, electroplating solutions, and radioactive tank wastes. The developed monitoring system was tested for acceptability before it was deployed for use in Hanford Tank S-109 retrieval activities. The acceptance tests included performance inspection of hardware, software, and chemometric data analysis to determine the expected measurement accuracy for the different chemical species encountered during S-109 retrieval.

  5. SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components

    SciTech Connect

    Wendt, R

    2014-06-15

    Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819 respectively and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking.

  6. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253
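
    The PCA-then-ANN mapping can be illustrated on synthetic data (a toy sketch; the Gaussian field shapes, travel range, and layer sizes below are assumptions, not the paper's setup):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      pos = np.linspace(0.0, 50.0, 500)                 # hypothetical travel (mm)
      centers = np.linspace(5.0, 45.0, 9)               # 9-sensor network
      field = np.exp(-((pos[:, None] - centers[None, :]) / 10.0) ** 2)
      field += rng.normal(scale=0.01, size=field.shape) # Gaussian sensor noise

      # PCA acts as the pseudo-linear filter in front of the ANN field-position map
      model = make_pipeline(PCA(n_components=3),
                            MLPRegressor(hidden_layer_sizes=(20,),
                                         max_iter=5000, random_state=0))
      model.fit(field, pos)
      print(model.predict(field[:3]))                   # recovered positions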

  7. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis

    PubMed Central

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253

  8. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.

  9. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as patches. Adaptive search windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566
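
    A heavily simplified stand-in for the patch-group PCA step is sketched below; it shrinks a group of similar patches by keeping only the leading components by energy, whereas the paper shrinks coefficients with a linear minimum mean square error rule (the function and threshold are illustrative assumptions):

      import numpy as np

      def patch_group_pca_denoise(patches, keep_energy=0.9):
          """patches: (n_patches, patch_dim) group of similar patches gathered
          from adaptive search windows. PCA the group and keep only the
          leading components; a crude stand-in for LMMSE shrinkage."""
          mean = patches.mean(axis=0)
          U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
          energy = np.cumsum(s ** 2) / (np.sum(s ** 2) + 1e-12)
          k = int(np.searchsorted(energy, keep_energy)) + 1   # components kept
          return (U[:, :k] * s[:k]) @ Vt[:k] + mean           # denoised patches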

  10. [Spine disc MR image analysis using improved independent component analysis based active appearance model and Markov random field].

    PubMed

    Hao, Shijie; Zhan, Shu; Jiang, Jianguo; Li, Hong; Ian, Rosse

    2010-02-01

    As there are not many research reports on segmentation and quantitative analysis of soft tissues in lumbar medical images, this paper presents an algorithm for segmenting and quantitatively analyzing discs in lumbar Magnetic Resonance Imaging (MRI). Vertebrae are first segmented using improved Independent component analysis based active appearance model (ICA-AAM), and lumbar curve is obtained with Minimum Description Length (MDL); based on these results, fast and unsupervised Markov Random Field (MRF) disc segmentation combining disc imaging features and intensity profile is further achieved; finally, disc herniation is quantitatively evaluated. The experiment proves that the proposed algorithm is fast and effective, thus providing doctors with aid in diagnosing and curing lumbar disc herniation.

  11. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    SciTech Connect

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals thus enabling accurate monitoring over time.

  12. The Langat River water quality index based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Mohd Ali, Zalina; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan

    2013-04-01

    The River Water Quality Index (WQI) is calculated using an aggregation function of six water-quality sub-indices, together with their respective relative importance or weights. The formula is used by the Department of Environment to indicate the general status of rivers in Malaysia. The six selected water quality variables used in the formula are: suspended solids (SS), biochemical oxygen demand (BOD), ammoniacal nitrogen (AN), chemical oxygen demand (COD), dissolved oxygen (DO) and pH. The sub-index calculations, determined by quality rating curves, and the weights were based on expert opinion. However, the use of sub-indices and the relative importance established in the formula is very subjective in nature and does not consider the inter-relationships among the variables. These relationships are important due to the multi-dimensional and complex characteristics of river water. Therefore, a well-known multivariate technique, Principal Component Analysis (PCA), was proposed to re-calculate the water quality index of the Langat River based on this inter-relationship approach. The application of this approach is not well studied in river water quality index development in Malaysia. Hence, the approach in this study is relevant and important, since the first river water quality index was developed in 1981. The PCA results showed that the weights obtained differ in the ranking of relative importance for particular variables compared to the classical approach used in the WQI-DOE. Based on the new weights, the Langat River water quality index was calculated, and the comparison between both indexes is also discussed in this paper.
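
    One plausible way to derive such inter-relationship-based weights (an illustration only; the random matrix is a placeholder and the paper's exact weighting scheme is not reproduced here) is to combine the absolute PCA loadings with the variance each component explains:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # rows = monitoring samples, columns = the six WQI variables
      X = np.random.default_rng(1).normal(size=(120, 6))
      labels = ["SS", "BOD", "AN", "COD", "DO", "pH"]

      Z = StandardScaler().fit_transform(X)       # common scale across variables
      pca = PCA().fit(Z)
      # weight each variable by |loadings| averaged over the components,
      # weighted by explained variance, then normalize to sum to one
      w = np.abs(pca.components_).T @ pca.explained_variance_ratio_
      w /= w.sum()
      print(dict(zip(labels, w.round(3))))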

  13. Day-ahead crude oil price forecasting using a novel morphological component analysis based model.

    PubMed

    Zhu, Qing; He, Kaijian; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, the work in this paper offers additional insights into the heterogeneous market microstructure, with economically viable interpretations.

  14. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time-scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, the work in this paper offers additional insights into the heterogeneous market microstructure, with economically viable interpretations. PMID:25061614

  15. Cistanches identification based on fluorescent spectral imaging technology combined with principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Dong, Jia; Huang, Furong; Li, Yuanpeng; Xiao, Chi; Xian, Ruiyi; Ma, Zhiguo

    2015-03-01

    In this study, fluorescent spectral imaging technology combined with principal component analysis (PCA) and artificial neural networks (ANNs) was used to identify Cistanche deserticola, Cistanche tubulosa and Cistanche sinensis, which are traditional Chinese medicinal herbs. The fluorescence spectroscopy imaging system acquired spectral images of 40 cistanche samples, and image denoising and binarization were applied to determine the effective pixels. Spectral curves were then drawn from the data in the 450-680 nm wavelength range. The data were preprocessed by a first-order derivative and analyzed by principal component analysis and an artificial neural network. The results show that principal component analysis can generally distinguish the cistanches, and further identification by neural networks makes the results more accurate: the correct rate on both the testing and training sets is as high as 100%. Identification of cistanches based on fluorescence spectral imaging combined with principal component analysis and an artificial neural network is therefore feasible.

  16. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  17. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
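
    The effect of kernelizing PCA is easy to demonstrate with scikit-learn's standard kernel PCA (shown here in place of the Gram-Schmidt approximation proposed in the record above, which is not publicly packaged; the dataset and kernel parameter are illustrative):

      from sklearn.datasets import make_circles
      from sklearn.decomposition import PCA, KernelPCA

      # two concentric rings: linear PCA cannot unfold them, an RBF kernel can
      X, _ = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)
      linear_scores = PCA(n_components=2).fit_transform(X)
      kernel_scores = KernelPCA(n_components=2, kernel="rbf",
                                gamma=5.0).fit_transform(X)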

  18. Development of a graded index microlens based fiber optical trap and its characterization using principal component analysis.

    PubMed

    Nylk, J; Kristensen, M V G; Mazilu, M; Thayil, A K; Mitchell, C A; Campbell, E C; Powis, S J; Gunn-Moore, F J; Dholakia, K

    2015-04-01

    We demonstrate a miniaturized single beam fiber optical trapping probe based on a high numerical aperture graded index (GRIN) micro-objective lens. This enables optical trapping at a distance of 200μm from the probe tip. The fiber trapping probe is characterized experimentally using power spectral density analysis and an original approach based on principal component analysis for accurate particle tracking. Its use for biomedical microscopy is demonstrated through optically mediated immunological synapse formation.

  19. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  20. [Qualitative analysis of chemical constituents in Si-Wu Decoction based on TCM component database].

    PubMed

    Wang, Zhen-fang; Zhao, Yang; Fan, Zi-quan; Kang, Li-ping; Qiao, Li-rui; Zhang, Jie; Gao, Yue; Ma, Bai-ping

    2015-10-01

    In order to clarify the chemical constituents of Si-Wu Decoction rapidly and holistically, we analyzed the ethanol extract of Si-Wu Decoction by UPLC/Q-TOF-MSE and UNIFI, which is based on a traditional Chinese medicine component database; the probable structures of 113 compounds were identified. The results show that this method can rapidly and effectively characterize the chemical compounds of Si-Wu Decoction and provides a new solution for the identification of components from complex TCM extracts.

  1. Sparse representation based latent components analysis for machinery weak fault detection

    NASA Astrophysics Data System (ADS)

    Tang, Haifeng; Chen, Jin; Dong, Guangming

    2014-06-01

    Weak machinery fault detection is a difficult task for two main reasons: (1) At the early stage of fault development, the signature of the fault-related component is incomplete and quite different from that at the apparent failure stage; in most instances, it seems almost identical to the normal operating state. (2) The fault feature is always submerged and distorted by relatively strong background noise and macro-structural vibrations even when the fault component is fully developed, especially when the structures of the fault components and the interference are close. To solve these problems, we should penetrate into the underlying structure of the signal. Sparse representation provides a class of algorithms for finding succinct representations of a signal that capture higher-level features in the data. With the purpose of extracting incomplete or seriously overwhelmed fault components, a sparse representation based latent components decomposition method is proposed in this paper. As a special case of sparse representation, the shift-invariant sparse coding algorithm provides an effective basis-function learning scheme for capturing the underlying structure of a machinery fault signal by iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. Among these basis functions, the fault feature can probably be contained and extracted if the optimal latent component is filtered. The proposed scheme is applied to analyze vibration signals of both rolling bearings and gears. An accelerated lifetime test of bearings validates the proposed method's ability to detect early faults. Besides, experiments on faulty bearings and gears with heavy noise and interference show the approach can effectively distinguish subtle differences between defect and interference. All the experimental data are analyzed by wavelet shrinkage and the basis pursuit de-noising (BPDN) method for comparison.
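
    The first of the two convex subproblems, the L1-regularized least-squares (coefficient) update with the basis held fixed, can be sketched with an off-the-shelf lasso solver (a sketch only; the names and array shapes are illustrative assumptions):

      import numpy as np
      from sklearn.linear_model import Lasso

      def sparse_code(windows, dictionary, alpha=0.1):
          """With the learned basis (dictionary) fixed, solve the
          L1-regularized least-squares problem for each window's codes.
          windows: (n_windows, w); dictionary: (n_atoms, w)."""
          lasso = Lasso(alpha=alpha, max_iter=10000)
          codes = np.array([lasso.fit(dictionary.T, x).coef_ for x in windows])
          return codes        # (n_windows, n_atoms), mostly zero entries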

  2. Envelope extraction based dimension reduction for independent component analysis in fault diagnosis of rolling element bearing

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Na, Jing; Li, Bin; Fung, Rong-Fong

    2014-06-01

    A robust feature extraction scheme for rolling element bearing (REB) fault diagnosis is proposed by combining envelope extraction and independent component analysis (ICA). In the present approach, envelope extraction is not only utilized to obtain the impulsive component corresponding to the faults from the REB, but also to reduce the dimension of the vibration sources included in the sensor-picked signals. Consequently, the difficulty of applying the ICA algorithm when the number of sensors is limited and the number of sources is unknown can be eliminated. Then, the ICA algorithm is employed to separate the envelopes according to the independence of the vibration sources. Finally, the vibration features related to the REB faults can be separated from disturbances and clearly exposed by the envelope spectrum. Simulations and experimental tests are conducted to validate the proposed method.
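
    A compact sketch of the scheme (illustrative shapes and parameters, not the authors' code) chains the Hilbert envelope with FastICA and then inspects each separated envelope's spectrum:

      import numpy as np
      from scipy.signal import hilbert
      from sklearn.decomposition import FastICA

      def envelope_ica(signals, n_sources=2):
          """signals: (n_channels, n_samples) accelerometer array. Demodulate
          each channel with the Hilbert envelope, then unmix the envelopes by
          FastICA; the spectrum of each source exposes fault frequencies."""
          env = np.abs(hilbert(signals, axis=1))          # envelope extraction
          env = env - env.mean(axis=1, keepdims=True)     # center per channel
          ica = FastICA(n_components=n_sources, random_state=0)
          sources = ica.fit_transform(env.T).T            # (n_sources, n_samples)
          spectra = np.abs(np.fft.rfft(sources, axis=1))  # envelope spectra
          return sources, spectra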

  3. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and performs well.

  4. FPGA-based real-time blind source separation with principal component analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Matthew; Meyer-Baese, Uwe

    2015-05-01

    Principal component analysis (PCA) is a popular technique for reducing the dimension of a large data set so that more informed conclusions can be made about the relationships between the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Due to the unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
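
    A floating-point software model of the Generalized Hebbian Algorithm (Sanger's rule) conveys the learning scheme the hardware design builds on; the learning rate, epoch count, and test data below are arbitrary choices, not the fixed-point FPGA implementation.

    ```python
    import numpy as np

    def gha(X, n_components, lr=1e-3, n_epochs=20, seed=0):
        # Estimate the leading principal components of zero-mean data X
        # (n_samples x n_features) with Sanger's rule.
        rng = np.random.default_rng(seed)
        W = 0.01 * rng.standard_normal((n_components, X.shape[1]))
        for _ in range(n_epochs):
            for x in X:
                y = W @ x
                # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
                W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 5)) * np.array([5.0, 3.0, 1.0, 0.5, 0.1])
    X -= X.mean(axis=0)
    W = gha(X, n_components=2)
    print(np.round(W @ W.T, 2))  # rows converge toward orthonormal eigenvectors
    ```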

  5. [Near-infrared spectrum quantitative analysis model based on principal components selected by elastic net].

    PubMed

    Chen, Wan-hui; Liu, Xu-hua; He, Xiong-kui; Min, Shun-geng; Zhang, Lu-da

    2010-11-01

    The elastic net is an improvement of the least-squares method obtained by introducing L1 and L2 penalties, and it has the advantage of variable selection. A quantitative analysis model built with the elastic net can improve prediction accuracy. Using 89 wheat samples as the experimental material, the spectrum principal components of the samples were selected by the elastic net. The analysis model was established for the near-infrared spectrum and the wheat protein content, and the feasibility of using the elastic net to establish the quantitative analysis model was confirmed. In the experiment, the 89 wheat samples were randomly divided into two groups, with 60 samples being the model set and 29 samples being the prediction set. The 60 samples were used to build the analysis model to predict the protein contents of the 29 samples; the correlation coefficient (R) of the predicted value and the chemically observed value was 0.9849, with a mean relative error of 2.48%. To further investigate the feasibility and stability of the model, the 89 samples were randomly divided five times, with 60 samples as the model set and 29 samples as the prediction set. The five groups of principal components selected by the elastic net for building the model were basically consistent, and compared with the PCR and PLS methods, the model prediction accuracies were all better than PCR and similar to PLS. Given that the elastic net can realize variable selection and the model has good prediction, the elastic net is a suitable method for building chemometric quantitative analysis models. PMID:21284156
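
    The modeling idea (project the spectra onto principal components, then let an elastic net select among the component scores) can be sketched as follows; the synthetic data and hyperparameters stand in for the paper's wheat spectra and tuning.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import ElasticNet
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((89, 700))         # mock spectra: 89 samples x 700 wavelengths
    y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(89)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=60, random_state=0)   # 60 modeling / 29 prediction samples

    pca = PCA(n_components=20).fit(X_train)    # spectrum principal components
    scores_train = pca.transform(X_train)
    scores_test = pca.transform(X_test)

    # l1_ratio mixes the L1 (selection) and L2 (shrinkage) penalties.
    enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(scores_train, y_train)
    selected = np.flatnonzero(enet.coef_)      # PCs retained by the L1 penalty
    print("selected components:", selected)
    print("R on prediction set:", np.corrcoef(enet.predict(scores_test), y_test)[0, 1])
    ```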

  6. [Removal Algorithm of Power Line Interference in Electrocardiogram Based on Morphological Component Analysis and Ensemble Empirical Mode Decomposition].

    PubMed

    Zhao, Wei; Xiao, Shixiao; Zhang, Baocan; Huang, Xiaojing; You, Rongyi

    2015-12-01

    Electrocardiogram (ECG) signals are susceptible to disturbance by 50 Hz power line interference (PLI) during acquisition and conversion. This paper, therefore, proposes a novel PLI removal algorithm based on morphological component analysis (MCA) and ensemble empirical mode decomposition (EEMD). First, according to the morphological differences in ECG waveform characteristics, the noisy ECG signal was decomposed into a mutated component, a smooth component and a residual component by MCA. Second, the intrinsic mode functions (IMFs) associated with the PLI were filtered out. The noise suppression rate (NSR) and the signal distortion ratio (SDR) were used to evaluate the de-noising algorithm. Finally, the ECG signals were reconstructed. Based on the experimental comparison, it was concluded that the proposed algorithm filters better than the improved Levkov algorithm: it not only effectively filters the PLI but also yields a smaller SDR value. PMID:27079083

  7. Research of power plant parameter based on the principal component analysis method

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Zhang, Di

    2012-01-01

    With the development of power technology and the expansion of power plants, the number of plant operation monitoring points is increasing as well. The large number of data parameters lets technicians obtain more information about unit operation, but makes adjusting and processing the data inconvenient. Principal Component Analysis was used for real-time data analysis during thermal power plant unit operation. New variables can be obtained from the multi-parameter indicators by knowledge mining. Since the new variables are pairwise uncorrelated and reflect most of the information in the original data, they can provide the basis for optimal operation and adjustment of the actual production units. This approach will also play an important role in plant data processing and related fields.

  8. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image. PMID:20508708
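
    A simplified sketch of combining the two families: substitute the first principal component of the MS bands with a histogram-matched pan image and then inject high-pass detail into the fused bands. The arrays and the injection gain are illustrative assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))                    # HR panchromatic image
    ms = rng.random((256, 256, 4))                  # LR MS bands, upsampled to pan size

    # PCA step: spectral transform of the MS bands.
    flat = ms.reshape(-1, 4)
    mean = flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    pcs = (flat - mean) @ Vt.T

    # Replace PC1 with the pan image matched to PC1's mean and variance.
    p = pan.ravel()
    pcs[:, 0] = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    fused = (pcs @ Vt + mean).reshape(ms.shape)

    # HPF step: inject the pan's high-frequency detail into each band.
    detail = pan - uniform_filter(pan, size=5)      # simple high-pass filter
    fused += 0.5 * detail[:, :, None]               # modest injection gain (assumed)
    print(fused.shape)
    ```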

  9. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
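
    Two of the building blocks SLCCA relies on, a significance map of wavelet coefficients and connected-component labelling within a subband, can be illustrated in a few lines using the PyWavelets package; the wavelet and threshold are arbitrary, and the codec's linking, bit-plane ordering, and arithmetic coding stages are not reproduced.

    ```python
    import numpy as np
    import pywt
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    coeffs = pywt.wavedec2(img, 'db2', level=3)     # multilevel 2-D wavelet transform

    cH, cV, cD = coeffs[1]                          # one detail-subband triplet
    threshold = np.percentile(np.abs(cH), 95)       # call the largest 5% "significant"
    significance = np.abs(cH) > threshold

    labels, n_clusters = ndimage.label(significance)  # within-subband clusters
    print("connected components of significant coefficients:", n_clusters)
    ```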

  10. Online Handwritten Signature Verification Using Neural Network Classifier Based on Principal Component Analysis

    PubMed Central

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  11. Online handwritten signature verification using neural network classifier based on principal component analysis.

    PubMed

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Yussof, Salman; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach illustrates a feature selection technique on the usually discarded information from the PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  12. Noise analysis in photonic true time delay systems based on broadband optical source and dispersion components.

    PubMed

    Xue, Xiaoxiao; Wen, He; Zheng, Xiaoping; Zhang, Hanyi; Guo, Yili; Zhou, Bingkun

    2009-02-01

    The noise in photonic true time delay systems based on a broadband optical source and dispersion components is investigated. It is found that the beat noise induced by the optical source begins to dominate and quickly grows far larger than the other noise terms once the detected optical power rises above a certain value P(thr). When the system dispersion is nonzero, the output carrier-to-noise ratio (CNR) changes periodically with the optical bandwidth due to the noise power increment and the dispersion-induced radio frequency signal power degradation; the maximum CNR is the peak value of the first period. For a set of specified system conditions, P(thr) is calculated to be -21 dBm, and the optimal optical bandwidth is 0.8 nm, at which the maximum CNR is 93.3 dB considering the noise in a 1 Hz bandwidth. The results are verified experimentally.

  13. Large sample inference for a win ratio analysis of a composite outcome based on prioritized components.

    PubMed

    Bebu, Ionut; Lachin, John M

    2016-01-01

    Composite outcomes are common in clinical trials, especially for multiple time-to-event outcomes (endpoints). The standard approach that uses the time to the first outcome event has important limitations. Several alternative approaches have been proposed to compare treatment versus control, including the proportion in favor of treatment and the win ratio. Herein, we construct tests of significance and confidence intervals in the context of composite outcomes based on prioritized components using the large sample distribution of certain multivariate multi-sample U-statistics. This non-parametric approach provides a general inference for both the proportion in favor of treatment and the win ratio, and can be extended to stratified analyses and the comparison of more than two groups. The proposed methods are illustrated with time-to-event outcomes data from a clinical trial.
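
    For two prioritized components, the win ratio itself reduces to counting pairwise wins and losses; the sketch below uses made-up outcome data and ignores censoring, which the paper's U-statistic machinery is designed to handle.

    ```python
    import numpy as np

    def win_ratio(treat, control):
        # treat, control: (n, 2) arrays; column 0 is the higher-priority outcome
        # (larger = better), column 1 the lower-priority one.
        wins = losses = 0
        for t in treat:
            for c in control:
                if t[0] != c[0]:          # decided on the first component
                    wins += t[0] > c[0]
                    losses += t[0] < c[0]
                elif t[1] != c[1]:        # otherwise fall through to the next one
                    wins += t[1] > c[1]
                    losses += t[1] < c[1]
        return wins / losses

    rng = np.random.default_rng(0)
    treat = rng.exponential([10.0, 5.0], size=(30, 2))
    control = rng.exponential([8.0, 4.0], size=(30, 2))
    print("win ratio:", round(win_ratio(treat, control), 2))
    ```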

  14. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms when comparing them with foreign requirements for the consideration of calculated situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels, and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA Norms for pressure vessel material acceptance, along with data from pressure vessel hydrotests; (3) a comparison of RF and USA Norms on the presence of defects in NPP vessels, development of defect schematization rules, and the foundation of a calculated defect (semi-axis ratio a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) an analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity of correcting estimation methods for ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculated situations of (a) LBB and (b) short cracks; (7) the necessity of correcting estimation methods of ultimate states considering static and cyclic loading (warm prestressing effect) of the pressure vessel, and estimation of the effect's stability; (8) proposals for corrections to the PNAE G-7-002-86 Norms.

  15. A performance measure based on principal component analysis for ceramic armor integrity

    NASA Astrophysics Data System (ADS)

    Rollins, D. K., Sr.; Stiehl, C. K.; Kotz, K.; Beverlin, L.; Brasche, L.

    2012-05-01

    Principal Component Analysis (PCA) has been applied to thru-transmission ultrasound data taken on ceramic armor. PCA helps find and accentuate differences within the tile, making them easier to detect. First, the thru-transmission ultrasound data were analyzed. As the ultrasound transducer moves along the surface of the tile, the signal from the sound wave is measured as it reaches the receiver, giving a time signal at each tile location. The information from this time signal is dissected into ten equal segments, and the maximum peak is measured within each segment, or gate. This gives ten measurements at each tile location that correspond to tile depth. An image can be made for each of the ten gate measurements. PCA was applied to these data for all of the tile samples, and a performance measure was developed from the loading information and tested on six samples from each of the panels. When these performance measures are compared to the results of the ballistics tests, it can be seen that the performance measure correlates well with the penetration velocities found from the ballistics tests.

  16. Study of T-wave morphology parameters based on Principal Components Analysis during acute myocardial ischemia

    NASA Astrophysics Data System (ADS)

    Baglivo, Fabricio Hugo; Arini, Pedro David

    2011-12-01

    Electrocardiographic repolarization abnormalities can be detected by Principal Components Analysis of the T-wave. In this work we studied the effect of signal averaging on the mean value and reproducibility of the ratio of the 2nd to the 1st eigenvalue of the T-wave (T21W) and the absolute and relative T-wave residuum (TrelWR and TabsWR) in the ECG during ischemia induced by Percutaneous Coronary Intervention. Also, the intra-subject and inter-subject variability of the T-wave parameters was analyzed. Results showed that TrelWR and TabsWR evaluated from the average of 10 complexes had lower values and higher reproducibility than those obtained from 1 complex. On the other hand, T21W calculated from 10 complexes did not show statistical differences versus the T21W calculated on single beats. The results of this study corroborate that, with a signal averaging technique, the 2nd and the 1st eigenvalues are not affected by noise, while the 4th to 8th eigenvalues are strongly affected by it, suggesting the use of the signal-averaged technique before calculation of the absolute and relative T-wave residuum. Finally, we have shown that the T-wave morphology parameters present high intra-subject stability.
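
    The T21W parameter is the ratio of the second to the first eigenvalue of the inter-lead covariance of the T wave; a minimal sketch with synthetic multi-lead T waves follows (the lead count and noise level are arbitrary assumptions).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_leads, n_samples = 8, 120
    base = np.sin(np.linspace(0, np.pi, n_samples))            # common T-wave shape
    leads = np.outer(rng.random(n_leads) + 0.5, base)          # per-lead amplitudes
    leads += 0.05 * rng.standard_normal((n_leads, n_samples))  # lead-specific noise

    cov = np.cov(leads)                       # inter-lead covariance of the T wave
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    t21w = eigvals[1] / eigvals[0]
    print("T21W:", round(float(t21w), 4))
    ```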

  17. [Determination of the Plant Origin of Licorice Oil Extract, a Natural Food Additive, by Principal Component Analysis Based on Chemical Components].

    PubMed

    Tada, Atsuko; Ishizuki, Kyoko; Sugimoto, Naoki; Yoshimatsu, Kayo; Kawahara, Nobuo; Suematsu, Takako; Arifuku, Kazunori; Fukai, Toshio; Tamura, Yukiyoshi; Ohtsuki, Takashi; Tahara, Maiko; Yamazaki, Takeshi; Akiyama, Hiroshi

    2015-01-01

    "Licorice oil extract" (LOE) (antioxidant agent) is described in the notice of Japanese food additive regulations as a material obtained from the roots and/or rhizomes of Glycyrrhiza uralensis, G. inflata or G. glabra. In this study, we aimed to identify the original Glycyrrhiza species of eight food additive products using LC/MS. Glabridin, a characteristic compound in G. glabra, was specifically detected in seven products, and licochalcone A, a characteristic compound in G. inflata, was detected in one product. In addition, Principal Component Analysis (PCA) (a kind of multivariate analysis) using the data of LC/MS or (1)H-NMR analysis was performed. The data of thirty-one samples, including LOE products used as food additives, ethanol extracts of various Glycyrrhiza species and commercially available Glycyrrhiza species-derived products were assessed. Based on the PCA results, the majority of LOE products was confirmed to be derived from G. glabra. This study suggests that PCA using (1)H-NMR analysis data is a simple and useful method to identify the plant species of origin of natural food additive products. PMID:26537652

  18. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix

    PubMed Central

    Gao, H; Zhang, T; Wu, Y; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-01-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association study (GWAS), principal component analysis (PCA) has been widely used to generate independent 'super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform the PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which multiple traits are transformed to a group of pseudo principal components. The PCA on the residual covariance matrix allows analyzing each pseudo principal component separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. Moreover, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering traits and meat quality traits in beef cattle. PMID:24984606
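
    The transformation step can be sketched in a few lines: regress each trait on the covariates, eigendecompose the residual covariance, and rotate the traits so that their residuals become uncorrelated. The toy model below stands in for the actual mixed-model machinery and cattle data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_traits = 500, 4
    covariates = np.c_[np.ones(n), rng.standard_normal((n, 2))]   # intercept + fixed effects
    Y = rng.standard_normal((n, n_traits)) @ rng.random((n_traits, n_traits))

    beta, *_ = np.linalg.lstsq(covariates, Y, rcond=None)  # per-trait regressions
    residuals = Y - covariates @ beta

    R = np.cov(residuals, rowvar=False)        # residual covariance matrix
    _, vecs = np.linalg.eigh(R)
    pseudo_traits = Y @ vecs                   # each column is then scanned separately
    # Residuals of the pseudo traits are uncorrelated (near-diagonal covariance):
    print(np.round(np.cov(residuals @ vecs, rowvar=False), 2))
    ```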

  1. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Image (fMRI) data. Each participant's data consisted of a resting state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or both the phenotypic and imaging data. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2%, produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two-class diagnosis and 66.8% on three-class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two-class diagnosis and 68.6% on three-class diagnosis, better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given the known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012).
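
    A hedged sketch of the kPCA-plus-SVM idea (not the authors' exact preprocessing or the ADHD-200 data) might look like this:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5000))     # mock per-subject feature vectors
    y = rng.integers(0, 2, 100)              # mock diagnosis labels

    clf = make_pipeline(
        StandardScaler(),
        KernelPCA(n_components=20, kernel='rbf', gamma=1e-4),  # kernelized PCA
        SVC(kernel='linear', C=1.0),                           # SVM classifier
    )
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```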

  2. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    NASA Astrophysics Data System (ADS)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli vs. neutral ones. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records of each emotional picture were concatenated with the neutral ones, and a three-step analysis was applied to each of them in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for the Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition and the neutral one. Compared to the neutral pictures, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were well consistent with other functional imaging

  3. Fuzzy Clusterwise Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Desarbo, Wayne S.; Takane, Yoshio

    2007-01-01

    Generalized Structured Component Analysis (GSCA) was recently introduced by Hwang and Takane (2004) as a component-based approach to path analysis with latent variables. The parameters of GSCA are estimated by pooling data across respondents under the implicit assumption that they all come from a single, homogenous group. However, as has been…

  4. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, H.D.; Ayers, C.W.; Casada, D.A.

    1996-11-26

    A new instrument is described for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician. 4 figs.

  5. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.

  6. Dissecting the phenotypic components of crop plant growth and drought responses based on high-throughput image analysis.

    PubMed

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-12-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues.

  7. Interpretable functional principal component analysis.

    PubMed

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data.

  8. Design and Validation of a Morphing Myoelectric Hand Posture Controller Based on Principal Component Analysis of Human Grasping

    PubMed Central

    Segil, Jacob L.; Weir, Richard F. ff.

    2015-01-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures including functional grasps using between two and four surface electromyography (EMG) electrodes pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest performing map controlled over 13 out of 15 joints accurately. PMID:23649286

  9. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
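
    The processing chain (ICA for dimensionality reduction, PLS for descriptor prediction, and a small neural network for classification) can be outlined as below; the sensor readings, descriptor scores, and network size are synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((26, 16))        # 26 wines x 16 e-nose sensor features
    descriptors = rng.random((26, 3))        # mock sensory-panel scores
    classes = rng.integers(0, 3, 26)         # mock wine classes

    ica = FastICA(n_components=6, random_state=0)
    Z = ica.fit_transform(X)                 # reduced independent components

    pls = PLSRegression(n_components=3).fit(Z, descriptors)
    print("predicted descriptors:", pls.predict(Z[:2]))

    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(Z, classes)
    print("training accuracy:", ann.score(Z, classes))
    ```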

  10. Principal Component Analysis-Based Pattern Analysis of Dose-Volume Histograms and Influence on Rectal Toxicity

    SciTech Connect

    Soehn, Matthias; Alber, Markus; Yan, Di

    2007-09-01

    Purpose: The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. Methods and Materials: PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as 'eigenmodes,' which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Results: Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe ~94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (~40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. Conclusions: PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
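
    Schematically, the analysis amounts to PCA on a matrix of cumulative DVHs followed by logistic regression of toxicity on the leading principal component scores; the DVHs and outcome labels below are simulated placeholders, not patient data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Mock cumulative DVHs: 262 patients x 81 dose bins, monotone non-increasing.
    dvhs = np.sort(rng.random((262, 81)), axis=1)[:, ::-1]
    toxicity = rng.integers(0, 2, 262)              # mock Grade >= 2 bleeding labels

    pca = PCA(n_components=3).fit(dvhs)
    print("variance explained:", pca.explained_variance_ratio_.cumsum())

    pcs = pca.transform(dvhs)                       # each DVH -> (PC1, PC2, PC3)
    model = LogisticRegression().fit(pcs, toxicity)
    print("logistic coefficients for PC1-PC3:", model.coef_)
    ```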

  11. Experimental Analysis of the Effective Components of Problem-Based Learning

    ERIC Educational Resources Information Center

    Pease, Maria A.; Kuhn, Deanna

    2011-01-01

    Problem-based learning (PBL) is widely endorsed as a desirable learning method, particularly in science. Especially in light of the method's heavy demand on resources, evidence-based practice is called for. Rigorous studies of the method's effectiveness, however, are scarce. In Study 1, college students enrolled in an elementary physics course…

  12. Independent Component Analysis-Based Identification of Covariance Patterns of Microstructural White Matter Damage in Alzheimer’s Disease

    PubMed Central

    Ouyang, Xin; Chen, Kewei; Yao, Li; Wu, Xia; Zhang, Jiacai; Li, Ke; Jin, Zhen; Guo, Xiaojuan

    2015-01-01

    The existing DTI studies have suggested that white matter damage constitutes an important part of the neurodegenerative changes in Alzheimer’s disease (AD). The present study aimed to identify the regional covariance patterns of microstructural white matter changes associated with AD. In this study, we applied a multivariate analysis approach, independent component analysis (ICA), to identify covariance patterns of microstructural white matter damage based on fractional anisotropy (FA) skeletonised images from DTI data in 39 AD patients and 41 healthy controls (HCs) from the Alzheimer’s Disease Neuroimaging Initiative database. The multivariate ICA decomposed the subject-dimension concatenated FA data into a mixing coefficient matrix and a source matrix. Twenty-eight independent components (ICs) were extracted, and a two sample t-test on each column of the corresponding mixing coefficient matrix revealed significant AD/HC differences in ICA weights for 7 ICs. The covariant FA changes primarily involved the bilateral corona radiata, the superior longitudinal fasciculus, the cingulum, the hippocampal commissure, and the corpus callosum in AD patients compared to HCs. Our findings identified covariant white matter damage associated with AD based on DTI in combination with multivariate ICA, potentially expanding our understanding of the neuropathological mechanisms of AD. PMID:25775003

  13. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not a closure motion (the largest-amplitude mode determined by principal component analysis) but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
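
    The tICA computation itself is compact: form the instantaneous and time-lagged covariance matrices of mean-free coordinates and solve the resulting generalized eigenvalue problem. A minimal sketch on a toy trajectory follows; the lag time and dimensions are chosen arbitrarily.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def tica(X, lag):
        # X: (n_frames, n_coords) trajectory; returns eigenvalues (time-lagged
        # autocorrelations) and tICA modes, slowest first.
        X = X - X.mean(axis=0)
        C0 = X.T @ X / len(X)                        # instantaneous covariance
        Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)   # time-lagged covariance
        Ct = 0.5 * (Ct + Ct.T)                       # symmetrize
        vals, vecs = eigh(Ct, C0)                    # generalized eigenproblem
        order = np.argsort(vals)[::-1]
        return vals[order], vecs[:, order]

    # One slowly varying coordinate plus fast noise dimensions.
    rng = np.random.default_rng(0)
    slow = np.cumsum(rng.standard_normal(50000)) * 0.01
    X = np.c_[slow + 0.1 * rng.standard_normal(50000),
              rng.standard_normal((50000, 3))]
    vals, vecs = tica(X, lag=100)
    print("leading autocorrelations:", np.round(vals[:2], 3))
    ```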

  14. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: the case of domain motions.

    PubMed

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-14

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not a closure motion (the largest-amplitude mode determined by principal component analysis) but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.

  15. ECG-based gating in ultra high field cardiovascular magnetic resonance using an independent component analysis approach

    PubMed Central

    2013-01-01

    Background In Cardiovascular Magnetic Resonance (CMR), the synchronization of image acquisition with heart motion is performed in clinical practice by processing the electrocardiogram (ECG). The ECG-based synchronization is well established for MR scanners with magnetic fields up to 3 T. However, this technique is prone to errors in ultra high field environments, e.g. in 7 T MR scanners as used in research applications. The high magnetic fields cause severe magnetohydrodynamic (MHD) effects which disturb the ECG signal. Image synchronization is thus less reliable and yields artefacts in CMR images. Methods A strategy based on Independent Component Analysis (ICA) was pursued in this work to enhance the ECG contribution and attenuate the MHD effect. ICA was applied to 12-lead ECG signals recorded inside a 7 T MR scanner. An automatic source identification procedure was proposed to identify an independent component (IC) dominated by the ECG signal. The identified IC was then used for detecting the R-peaks. The presented ICA-based method was compared to other R-peak detection methods using 1) the raw ECG signal, 2) the raw vectorcardiogram (VCG), 3) the state-of-the-art gating technique based on the VCG, 4) an updated version of the VCG-based approach and 5) the ICA of the VCG. Results ECG signals from eight volunteers were recorded inside the MR scanner. Recordings with an overall length of 87 min accounting for 5457 QRS complexes were available for the analysis. The records were divided into a training and a test dataset. In terms of R-peak detection within the test dataset, the proposed ICA-based algorithm achieved a detection performance with an average sensitivity (Se) of 99.2%, a positive predictive value (+P) of 99.1%, with an average trigger delay and jitter of 5.8 ms and 5.0 ms, respectively. Long term stability of the demixing matrix was shown based on two measurements of the same subject, each being separated by one year, whereas an averaged detection

  16. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  17. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and

  18. Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.

    PubMed

    Siuly, Siuly; Li, Yan

    2015-04-01

    The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals, to determine valuable features from original epileptic EEG data, and to discover an efficient classifier for the features. An optimum allocation based principal component analysis method, named OA_PCA, is developed for feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large number of EEG data. Principal component analysis (PCA) is applied to construct uncorrelated components and also to reduce the dimensionality of the OA samples for enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers are applied and tested: the least square support vector machine (LS-SVM), the naïve Bayes classifier (NB), the k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA). Furthermore, our approaches are also compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% overall classification accuracy (OCA), improving by up to 7.10% over the existing algorithms for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in epileptic EEG signal classification, outperforming all the recently reported existing methods in the literature.

  19. Research on matching area selection criteria for gravity gradient navigation based on principal component analysis and analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Li, Kaihan; Tang, Jianqiao; Ma, Jie

    2015-12-01

    The selection of the matching area is the foundation of gravity gradient aided navigation. In this paper, a gravity gradient matching area selection criterion is proposed based on principal component analysis (PCA) and the analytic hierarchy process (AHP). Firstly, the features of the gravity gradient are extracted and nine gravity gradient characteristic parameters are obtained. Secondly, combining PCA with AHP, a PCA-AHP model is built and the nine characteristic parameters are fused with it. Finally, the gravity gradient matching area selection criterion is given. Using this criterion, a gravity gradient area can be divided into a matching area and a non-matching area. The simulation results show that the gravity gradient positioning performance in the selected matching area is superior to that in the non-matching area, with a matching rate greater than 90% and a position error of less than one gravity gradient grid.

  20. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA)-based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported [1,2,3] unmixing algorithm. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  1. Integration of Multiple Components in Polystyrene-based Microfluidic Devices Part 2: Cellular Analysis

    PubMed Central

    Anderson, Kari B.; Halpin, Stephen T.; Johnson, Alicia S.; Martin, R. Scott; Spence, Dana M.

    2012-01-01

    In Part II of this series describing the use of polystyrene (PS) devices for microfluidic-based cellular assays, various cell types and detection strategies are employed to perform three fundamental assays often associated with cells. Specifically, using either integrated electrochemical sensing or optical measurements with a standard multi-well plate reader, cellular uptake, production, or release of important cellular analytes is determined on a PS-based device. One experiment involved the fluorescence measurement of nitric oxide (NO) produced within an endothelial cell line following stimulation with ATP. The result was a four-fold increase in NO production (as compared to a control), with this receptor-based mechanism of NO production verifying the maintenance of cell receptors following immobilization onto the PS substrate. The ability to monitor cellular uptake was also demonstrated by optical determination of Ca2+ uptake into endothelial cells following stimulation with the Ca2+ ionophore A23187. The result was a significant increase (42%) in calcium uptake in the presence of the ionophore, as compared to a control (17%) (p < 0.05). Finally, the release of catecholamines from a dopaminergic cell line (PC 12 cells) was electrochemically monitored, with the electrodes embedded in the PS-based device. The PC 12 cells adhered better to the PS devices than to PDMS. Potassium stimulation resulted in the release of 114 ± 11 µM catecholamines, a significant increase (p < 0.05) over the release from cells that had been exposed to an inhibitor (reserpine, 20 ± 2 µM of catecholamines). The ability to successfully measure multiple analytes, generated in different ways from the various cells under investigation, suggests that PS may be a useful material for microfluidic device fabrication, especially considering the enhanced cell adhesion to PS, its enhanced rigidity/amenability to automation, and its ability to enable a wider range of

  2. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) while the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), where the objects present in the test scene are identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and a neighbouring black pixel, if any. An edge does not exist from a black pixel to a neighbouring white pixel. This produces a disconnected components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and the occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges. The proposed technique is evaluated using two Australian data sets
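
    The Connected Component Analysis phase, which labels contiguous elevated regions of the primary building mask, can be sketched as follows; the mask is synthetic, and the later NDVI/entropy fusion steps are not reproduced.

    ```python
    import numpy as np
    from scipy import ndimage

    mask = np.zeros((60, 80), dtype=bool)
    mask[5:20, 10:30] = True                 # one "building"
    mask[35:55, 50:75] = True                # another elevated object

    labels, n_objects = ndimage.label(mask)  # default 4-connected labelling
    areas = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    print("objects found:", n_objects, "pixel areas:", areas)
    ```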

  3. Failure Analysis of Ceramic Components

    SciTech Connect

    B.W. Morris

    2000-06-29

    Ceramics are being considered for a wide range of structural applications due to their low density and their ability to retain strength at high temperatures. The inherent brittleness of monolithic ceramics requires a departure from the deterministic design philosophy utilized to analyze metallic structural components. The design program "Ceramic Analysis and Reliability Evaluation of Structures Life" (CARES/LIFE) developed by NASA Lewis Research Center uses a probabilistic approach to predict the reliability of monolithic components under operational loading. The objective of this study was to develop an understanding of the theories used by CARES/LIFE to predict the reliability of ceramic components and to assess the ability of CARES/LIFE to accurately predict the fast fracture behavior of monolithic ceramic components. A finite element analysis was performed to determine the temperature and stress distribution of a silicon carbide O-ring under diametral compression. The results of the finite element analysis were supplied as input into CARES/LIFE to determine the fast fracture reliability of the O-ring. Statistical material strength parameters were calculated from four-point flexure bar test data. The predicted reliability showed excellent correlation with O-ring compression test data indicating that the CARES/LIFE program can be used to predict the reliability of ceramic components subjected to complicated stress states using material properties determined from simple uniaxial tensile tests.

  4. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.
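
    Entries 4-6 index the same paper in different databases. A schematic NumPy version of the tICA computation they describe follows, using the standard formulation (time-lagged covariance and a generalized eigenproblem) rather than the authors' own code; the trajectory below is a synthetic stand-in for Cα coordinates.

        import numpy as np
        from scipy.linalg import eigh

        def tica_modes(X, lag):
            """Solve C(tau) v = lambda C(0) v for trajectory X of shape (n_frames, n_coords)."""
            X = X - X.mean(axis=0)
            c0 = X.T @ X / len(X)                             # instantaneous covariance
            ct = X[:-lag].T @ X[lag:] / (len(X) - lag)        # time-lagged covariance
            ct = 0.5 * (ct + ct.T)                            # symmetrize
            eigvals, eigvecs = eigh(ct, c0)                   # generalized eigenproblem
            order = np.argsort(eigvals)[::-1]                 # slowest modes first
            return eigvals[order], eigvecs[:, order]

        rng = np.random.default_rng(0)
        X = np.cumsum(rng.standard_normal((5000, 12)), axis=0)  # toy slow trajectory
        eigvals, modes = tica_modes(X, lag=100)
        print("slowest-mode autocorrelations:", eigvals[:3])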

  5. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    SciTech Connect

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-07

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  6. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis.

    PubMed

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  7. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of the social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components, all of the textbooks' content (including texts, exercises, and illustrations) was examined based on…

  8. Analysis of the correlation between dipeptides and taste differences among soy sauces by using metabolomics-based component profiling.

    PubMed

    Yamamoto, Shinya; Shiga, Kazuki; Kodama, Yukako; Imamura, Miho; Uchida, Riichiro; Obata, Akio; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-07-01

    Characterizing the relationships between the components and taste differences among soy sauces can help evaluate and improve the quality of soy sauces. Although previous studies have reported certain taste-active dipeptides, the relationships between taste differences and the dipeptides of soy sauces remain unknown. Therefore, our objective in this study was to investigate the correlations between the dipeptides and the taste differences among soy sauces. To analyze the dipeptides, we constructed an analytical method using liquid chromatography/tandem mass spectrometry (LC/MS/MS) in multiple reaction monitoring mode. With this method, we detected 237 dipeptides, the largest number ever detected in soy sauce research. Next, orthogonal projections to latent structures (OPLS) regressions were performed. The data matrix of components, including dipeptides and other low-molecular-weight hydrophilic components obtained from gas chromatography/mass spectrometry (GC/MS), served as the explanatory variables (366 in total), whereas a sensory data matrix obtained using quantitative descriptive analysis served as the response variable. The accuracy of the models for sweetness and saltiness differences constructed using the combined LC/MS/MS and GC/MS data matrix was higher than that of models constructed using only the GC/MS data matrix. Investigating the correlation between the dipeptides and taste differences among soy sauces using variable importance in the projection (VIP) scores showed that many dipeptides correlated highly with taste differences. Specifically, Ile-Gln, Pro-Lys, Ile-Glu, Thr-Phe, and Leu-Gln showed high VIP scores for sweetness differences. This study is the first report to reveal the correlations between dipeptides and taste differences among soy sauces.
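
    A hedged sketch of the regression-and-VIP step: scikit-learn's standard PLS regression stands in for the orthogonalized (OPLS) variant used in the paper, and the data shapes (40 samples x 366 components) are invented.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def vip_scores(pls):
            """Variable importance in projection from a fitted PLS model."""
            t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
            p, a = w.shape
            ssy = np.sum(t ** 2, axis=0) * q.ravel() ** 2     # explained Y-variance per component
            weights = (w / np.linalg.norm(w, axis=0)) ** 2
            return np.sqrt(p * weights @ ssy / ssy.sum())

        rng = np.random.default_rng(1)
        X = rng.standard_normal((40, 366))                    # component profiles
        y = X[:, 5] - 0.5 * X[:, 17] + 0.1 * rng.standard_normal(40)  # taste scores
        pls = PLSRegression(n_components=3).fit(X, y)
        vip = vip_scores(pls)
        print("top predictors by VIP:", np.argsort(vip)[::-1][:5])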

  9. Application of independent component analysis method in real-time spectral analysis of gaseous mixtures for acousto-optical spectrometers based on differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Fadeyev, A. V.; Pozhar, V. E.

    2012-10-01

    We discuss the reliability problem of a time-optimized method for remote optical spectral analysis of gas-polluted ambient air. The method, based on differential optical absorption spectroscopy (DOAS), enables fragmentary spectrum registration (FSR) and is suitable for random-spectral-access (RSA) optical spectrometers such as acousto-optical (AO) ones. Here, an algorithm based on the statistical method of independent component analysis (ICA) is proposed for estimating the correctness of the absorption spectral lines selected for the FSR method. Implementations of the ICA method for RSA-based real-time adaptive systems are considered. Numerical simulations are presented using real spectra detected by the trace gas monitoring system GAOS, which is based on an AO spectrometer.

  10. High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang

    2016-07-01

    Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, development of systems with high imaging speed, reasonable cost, and manageable data flow is needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images with sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, thus validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that for BP with full sampling. In addition, compared with compressed-sensing-based PACT, PCA-PACT had approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.
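
    A toy analogue of the PCA-PACT idea under stated assumptions: a PCA basis is learned from fully sampled training images, and a sparsely sampled image is then recovered in a single least-squares step, with no iteration. All data here are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        n_train, n_pix, k = 200, 400, 20
        U = np.linalg.qr(rng.standard_normal((n_pix, k)))[0]  # ground-truth subspace
        train = rng.standard_normal((n_train, k)) @ U.T       # flattened training images

        mean = train.mean(axis=0)
        # PCA basis via SVD of the centred training data.
        _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
        basis = Vt[:k].T                                      # (n_pix, k)

        truth = rng.standard_normal(k) @ U.T                  # unseen image
        keep = rng.random(n_pix) < 0.5                        # 50% sparse sampling
        coeffs, *_ = np.linalg.lstsq(basis[keep], (truth - mean)[keep], rcond=None)
        recon = mean + basis @ coeffs                         # one-shot reconstruction
        print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))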

  11. Ground-roll separation of seismic data based on morphological component analysis in two-dimensional domain

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Hong; Qu, Guang-Zhong; Zhang, Yang; Bi, Yun-Yun; Wang, Jin-Ju

    2016-03-01

    Ground roll is an interference wave that severely degrades the signal-to-noise ratio of seismic data and affects its subsequent processing and interpretation. In this study, according to differences in morphological characteristics between ground roll and reflected waves, we use morphological component analysis based on two-dimensional dictionaries to separate ground roll and reflected waves. Because ground roll is characterized by low frequency, low velocity, and dispersion, we select the two-dimensional undecimated discrete wavelet transform as the sparse representation dictionary for ground roll. Because of the strong local correlation of the reflected waves, we select the two-dimensional local discrete cosine transform as the sparse representation dictionary for reflected waves. A sparse representation model of seismic data is constructed based on a two-dimensional joint dictionary, and a block coordinate relaxation algorithm is then used to solve the model and decompose the seismic record into a reflected-wave part and a ground-roll part. Results for synthetic seismic data and an application to real seismic data indicate that, with this model, strong-energy ground roll is considerably suppressed and the waveform of the reflected wave is effectively protected.
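
    A simplified morphological component analysis in the spirit of the record: block coordinate relaxation with a 2-D DCT dictionary for the locally coherent part and a 2-D wavelet dictionary for the edge-like part. A decimated wavelet stands in for the undecimated transform used by the authors, and the input is a synthetic composite rather than a seismic record.

        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def mca_separate(data, n_iter=30, lam_max=2.0):
            part_dct = np.zeros_like(data)
            part_wav = np.zeros_like(data)
            for i in range(n_iter):
                lam = lam_max * (1 - i / n_iter)              # decreasing threshold
                # Update the DCT (locally coherent) component.
                r = data - part_wav
                part_dct = idctn(soft(dctn(r, norm="ortho"), lam), norm="ortho")
                # Update the wavelet (edge-like) component.
                r = data - part_dct
                coeffs = pywt.wavedec2(r, "db4", level=3)
                coeffs = [coeffs[0]] + [tuple(soft(c, lam) for c in d) for d in coeffs[1:]]
                part_wav = pywt.waverec2(coeffs, "db4")
            return part_dct, part_wav

        x = np.linspace(0, 1, 128)
        smooth = np.outer(np.cos(2 * np.pi * 2 * x), np.cos(2 * np.pi * 3 * x))  # DCT-sparse
        stripes = np.zeros((128, 128)); stripes[:, ::16] = 1.0                   # wavelet-friendly
        record = smooth + 0.5 * stripes
        coherent_part, edge_part = mca_separate(record)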

  12. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  13. Source-Based Morphometry: The Use of Independent Component Analysis to Identify Gray Matter Differences With Application to Schizophrenia

    PubMed Central

    Xu, Lai; Groth, Karyn M.; Pearlson, Godfrey; Schretlen, David J.; Calhoun, Vince D.

    2009-01-01

    We present a multivariate alternative to the voxel-based morphometry (VBM) approach called source-based morphometry (SBM), to study gray matter differences between patients and healthy controls. The SBM approach begins with the same preprocessing procedures as VBM. Next, independent component analysis is used to identify naturally grouping, maximally independent sources. Finally, statistical analyses are used to determine the significant sources and their relationship to other variables. The identified “source networks,” groups of spatially distinct regions with common covariation among subjects, provide information about localization of gray matter changes and their variation among individuals. In this study, we first compared VBM and SBM via a simulation and then applied both methods to real data obtained from 120 chronic schizophrenia patients and 120 healthy controls. SBM identified five gray matter sources as significantly associated with schizophrenia. These included sources in the bilateral temporal lobes, thalamus, basal ganglia, parietal lobe, and frontotemporal regions. None of these showed an effect of sex. Two sources in the bilateral temporal and parietal lobes showed age-related reductions. The most significant source of schizophrenia-related gray matter changes identified by SBM occurred in the bilateral temporal lobe, while the most significant change found by VBM occurred in the thalamus. The SBM approach found changes not identified by VBM in basal ganglia, parietal, and occipital lobe. These findings show that SBM is a multivariate alternative to VBM, with wide applicability to studying changes in brain structure. PMID:18266214

  14. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    PubMed Central

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model were performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study whose sample consisted of 270 personnel from the different administrative groups (contractual, fixed-term, and regular) at Kermanshah University of Medical Sciences, selected from a population of 872 through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach’s alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including the environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), and ability (knowledge and skills) variables and total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results also indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of the university staff from the employees’ viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff’s view, were in a favorable condition and needed to be further taken into consideration by the authorities. PMID:25560364

  15. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    We compare different mother wavelets used for de-noising model and experimental data represented by profiles of absorption spectra of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.
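
    A sketch of the mother-wavelet comparison, assuming soft thresholding with the universal threshold; the absorption profile below is a synthetic stand-in for exhaled-air spectra.

        import numpy as np
        import pywt

        def denoise(signal, wavelet, level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise estimate
            t = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, t, "soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        x = np.linspace(0, 1, 1024)
        clean = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.05) ** 2)
        noisy = clean + 0.05 * np.random.default_rng(4).standard_normal(x.size)

        for wav in ["db4", "sym8", "coif3"]:                  # candidate mother wavelets
            err = np.linalg.norm(denoise(noisy, wav) - clean)
            print(f"{wav}: reconstruction error {err:.3f}")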

  16. Estimating stellar atmospheric parameters, absolute magnitudes and elemental abundances from the LAMOST spectra with Kernel-based Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Xiang, M.-S.; Liu, X.-W.; Shi, J.-R.; Yuan, H.-B.; Huang, Y.; Luo, A.-L.; Zhang, H.-W.; Zhao, Y.-H.; Zhang, J.-N.; Ren, J.-J.; Chen, B.-Q.; Wang, C.; Li, J.; Huo, Z.-Y.; Zhang, W.; Wang, J.-L.; Zhang, Y.; Hou, Y.-H.; Wang, Y.-F.

    2016-10-01

    Accurate determination of stellar atmospheric parameters and elemental abundances is crucial for Galactic archeology via large-scale spectroscopic surveys. In this paper, we estimate stellar atmospheric parameters - effective temperature Teff, surface gravity log g and metallicity [Fe/H], absolute magnitudes MV and MKs, α-element to metal (and iron) abundance ratio [α/M] (and [α/Fe]), as well as carbon and nitrogen abundances [C/H] and [N/H] from the LAMOST spectra with a multivariate regression method based on kernel-based principal component analysis, using stars in common with other surveys (Hipparcos, Kepler, APOGEE) as training data sets. Both internal and external examinations indicate that given a spectral signal-to-noise ratio (SNR) better than 50, our method is capable of delivering stellar parameters with a precision of ˜100 K for Teff, ˜0.1 dex for log g, 0.3 - 0.4 mag for MV and MKs, 0.1 dex for [Fe/H], [C/H] and [N/H], and better than 0.05 dex for [α/M] ([α/Fe]). The results are satisfactory even for a spectral SNR of 20. The work presents first determinations of [C/H] and [N/H] abundances from a vast data set of LAMOST, and, to our knowledge, the first reported implementation of absolute magnitude estimation directly based on the observed spectra. The derived stellar parameters for millions of stars from the LAMOST surveys will be publicly available in the form of value-added catalogues.
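
    A schematic version of the estimation pipeline under stated assumptions: spectra are projected onto kernel principal components, and a label such as Teff is regressed on the projections. The spectra, labels, and kernel settings below are placeholders, not the LAMOST configuration.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(5)
        spectra = rng.standard_normal((500, 300))             # training spectra
        teff = 5000 + 800 * np.tanh(spectra[:, 10]) + 20 * rng.standard_normal(500)

        kpca = KernelPCA(n_components=50, kernel="rbf", gamma=1e-3)
        features = kpca.fit_transform(spectra)                # kernel principal components
        model = Ridge(alpha=1.0).fit(features, teff)

        new_spec = rng.standard_normal((1, 300))
        print("predicted Teff:", model.predict(kpca.transform(new_spec))[0])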

  17. Principal components analysis competitive learning.

    PubMed

    López-Rubio, Ezequiel; Ortiz-de-Lazcano-Lobato, Juan Miguel; Muñoz-Pérez, José; Gómez-Ruiz, José Antonio

    2004-11-01

    We present a new neural model that extends classical competitive learning by performing a principal components analysis (PCA) at each neuron. This model represents an improvement with respect to known local PCA methods, because it does not require presenting the entire data set to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction properties of the PCA. Furthermore, every neuron is able to modify its behavior to adapt to the local dimensionality of the input distribution. Hence, our model has a dimensionality estimation capability. The experimental results we present show the dimensionality-reduction capabilities of the model with multisensor images.

  18. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  19. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
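
    A hedged sketch of the modelling chain shared by this record and the closely related entry 7 further below: principal components remove multicollinearity among the meteorological predictors before a multilayer perceptron with sigmoid nonlinearity is trained. Plain (unrotated) components stand in for the rotated component matrix used in the paper, and all values are simulated.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        predictors = rng.standard_normal((240, 6))            # e.g. cloud, temperature, rain
        ozone = 280 + 15 * predictors[:, 0] - 8 * predictors[:, 2] \
                + 3 * rng.standard_normal(240)

        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=3),                              # decorrelated predictors
            MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                         max_iter=5000, random_state=0),
        )
        model.fit(predictors, ozone)
        print("R^2 on training data:", model.score(predictors, ozone))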

  20. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.

  1. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  2. Identification of copper sources in urban surface waters using the principal component analysis based on aquatic parameters.

    PubMed

    Sodre, Fernando Fabriz; dos Anjos, Vanessa Egea; Prestes, Ellen Christine; Grassi, Marco Tadeu

    2005-06-01

    The goal of this work was to identify the sources of copper loads in urban surface waters using principal component analysis applied to aquatic parameter data. Water samples from the Irai and Iguacu rivers were collected monthly during a 12-month period at two points located upstream and downstream of a metropolitan region. pH, total alkalinity, dissolved chloride, total suspended solids, dissolved organic matter, total recoverable copper, temperature, and precipitation data provided reliable information concerning the characteristics and water quality of both rivers. Principal component analysis indicated seasonal and spatial effects on copper concentrations and loads in both environments. During the rainy season, non-point sources such as urban run-off are believed to be the major source of copper in both cases. In contrast, during the lower precipitation period, the discharge of raw sewage seems to be the primary source of copper to the Iguacu River, which also exhibited higher total metal concentrations.

  3. [Component analysis of the circulating fluid in an adsorption tower in a p-xylene unit based on Raman spectral decomposition].

    PubMed

    Wang, Bin; Dai, Lian-kui

    2015-02-01

    In order to achieve fast and accurate online analysis of the circulating fluid in an adsorption tower in a p-xylene unit, the Raman spectral analysis method is adopted. However, the Raman spectra of the pure components included in the circulating fluid overlap, and the concentration of each component varies markedly, so existing Raman analysis techniques need a large number of training samples. This paper therefore applies a Raman spectral decomposition method to component analysis of the circulating fluid. First of all, the Raman spectra of the pure components and the spectra of a few training samples are measured, and baseline subtraction and mean normalization are applied to obtain pretreated Raman spectra. Then the characteristic wavenumber range, 680-880 cm^-1, is chosen, and the Raman spectral decomposition method is adopted to obtain the decomposition coefficients of each component for each training sample. Next, a mathematical model between the coefficients and the concentration of each component is built from all training samples. For a test sample, the same spectral pretreatment and spectral decomposition over the same wavenumber range are applied to obtain the decomposition coefficients of each component. Based on the fitted model, the concentrations of all components can then be predicted. Experimental results show that the standard prediction errors for the concentrations of toluene, ethylbenzene, p-xylene, m-xylene, o-xylene, and p-diethylbenzene are 0.301%, 0.088%, 0.563%, 0.384%, 0.366%, and 0.536%, respectively. The above method provides a methodological basis for the online analysis of the circulating fluid in adsorption towers.
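
    A sketch of the decomposition step, assuming non-negative least squares over the chosen wavenumber window; the pure-component spectra below are synthetic Gaussian bands, not real Raman measurements.

        import numpy as np
        from scipy.optimize import nnls

        wn = np.linspace(680, 880, 400)                       # wavenumber axis, cm^-1
        def band(center, width):
            return np.exp(-((wn - center) / width) ** 2)

        # Three synthetic pure-component spectra as dictionary columns.
        pure = np.column_stack([band(720, 8), band(790, 10), band(830, 6)])
        true_coeffs = np.array([0.2, 0.5, 0.3])
        mixture = pure @ true_coeffs + 0.01 * np.random.default_rng(7).standard_normal(wn.size)

        coeffs, residual = nnls(pure, mixture)                # non-negative decomposition
        print("decomposition coefficients:", coeffs.round(3))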

  4. Principal component analysis-based anatomical motion models for use in adaptive radiation therapy of head and neck cancer patients

    NASA Astrophysics Data System (ADS)

    Chetvertkov, Mikhail A.

    Purpose: To develop standard and regularized principal component analysis (PCA) models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients, assess their potential use in adaptive radiation therapy (ART), and to extract quantitative information for treatment response assessment. Methods: Planning CT (pCT) images of H&N patients were artificially deformed to create "digital phantom" images, which modeled systematic anatomical changes during Radiation Therapy (RT). Artificial deformations closely mirrored patients' actual deformations, and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms), and between pCT and clinical CBCTs. Patient-specific standard PCA (SPCA) and regularized PCA (RPCA) models were built from these synthetic and clinical DVF sets. Eigenvectors, or eigenDVFs (EDVFs), having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Modeled anatomies were used to assess the dose deviations with respect to the planned dose distribution. Results: PCA models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade SPCA's ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes, and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. For dose assessment it has been shown that the modeled dose distribution was different from the planned dose for the parotid glands due to their shrinkage and shift into

  5. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Wada, Ted S.

    In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the acoustic echo cancellation (AEC) problem. Such a “system” perspective aims for the integration of individual components, or algorithms, into a cohesive unit for the benefit of the system as a whole to cope with real-world enhancement problems. The standard system identification approach by minimizing the mean square error (MSE) of a linear system is sensitive to distortions that greatly affect the quality of the identification result. Therefore, we begin by examining in detail the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of the LMS algorithm when there is an interference at the near end, where the source of distortion may be linear or nonlinear. We provide a thorough derivation and analysis of the error recovery nonlinearity (ERN) that “enhances” the filter estimation error prior to the adaptation to transform the corrupted error’s distribution into a desired one, or very close to it, in order to assist the linear adaptation process. We reveal important connections of the residual echo enhancement (REE) technique to other existing AEC and signal enhancement procedures, where the technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. Notably, the single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. Indeed, SBSS optimized via ICA leads to the system combination of the LMS algorithm with the ERN that allows continuous and stable adaptation even during double talk. Next, we extend the system perspective

  6. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  7. Modeling and Prediction of Monthly Total Ozone Concentrations by Use of an Artificial Neural Network Based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Surajit; Chattopadhyay, Goutami

    2012-10-01

    In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.

  8. Structured Functional Principal Component Analysis

    PubMed Central

    Shou, Haochang; Zipunnikov, Vadim; Crainiceanu, Ciprian M.; Greven, Sonja

    2015-01-01

    Summary Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep. PMID:25327216

  9. Component protection based automatic control

    SciTech Connect

    Otaduy, P J

    1992-03-01

    Control and safety systems, as well as operating procedures, are designed on the basis of critical process parameter limits. The expectation is that short- and long-term mechanical damage and process failures will be avoided by operating the plant within the specified constraint envelopes. In this paper, one of the Advanced Liquid Metal Reactor (ALMR) design duty-cycle events is discussed to corroborate that the time has come to explicitly make component protection part of the control system. Component stress assessment and aging data should be an integral part of the control system. Transient trajectory planning and operating limits could then be aimed at minimizing component-specific and overall plant component damage cost functions. The impact of transients on critical components could then be managed according to plant lifetime design goals. The need for developing methodologies for online transient trajectory planning and assessment of operating limits, in order to facilitate the explicit incorporation of damage assessment capabilities into plant control and protection systems, is discussed. 12 refs.

  10. Enantiomeric separation of isochromene derivatives by high-performance liquid chromatography using cyclodextrin based stationary phases and principal component analysis of the separation data.

    PubMed

    Nanayakkara, Yasith S; Woods, Ross M; Breitbach, Zachary S; Handa, Sachin; Slaughter, LeGrande M; Armstrong, Daniel W

    2013-08-30

    Isochromene derivatives are very important precursors in the natural products industry. Hence the enantiomeric separations of chiral isochromenes are important in the pharmaceutical industry and for organic asymmetric synthesis. Here we report enantiomeric separations of 21 different chiral isochromene derivatives, which were synthesized using alkynylbenzaldehyde cyclization catalyzed by chiral gold(I) acyclic diaminocarbene complexes. All separations were achieved by high-performance liquid chromatography with cyclodextrin based (Cyclobond) chiral stationary phases. Retention data of 21 chiral compounds and 14 other previously separated isochromene derivatives were analyzed using principal component analysis. The effect of the structure of the substituents on the isochromene ring on enantiomeric resolution as well as the other separation properties was analyzed in detail. Using principal component analysis it can be shown that the structural features that contribute to increased retention are different from those that enhance enantiomeric resolution. In addition, principal component analysis is useful for eliminating redundant factors from consideration when analyzing the effect of various chromatographic parameters. It was found that the chiral recognition mechanism is different for the larger γ-cyclodextrin as compared to the smaller β-cyclodextrin derivatives. Finally this specific system of chiral analytes and cyclodextrin based chiral selectors provides an effective format to examine the application of principal component analysis to enantiomeric separations using basic retention data and structural features. PMID:23906806

  11. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides performance improvement over conventional feature extraction methods.

  12. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  13. Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
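
    A simplified illustration of sequential PCA of the kind DOGEDYN performs: principal vectors are extracted one at a time, with deflation between extractions. Plain power iteration is used here; DOGEDYN's dominant-element updates and dynamic learning rate are omitted, so this is only a sketch of the general approach.

        import numpy as np

        def sequential_pca(X, n_components, n_iter=200):
            X = X - X.mean(axis=0)
            cov = X.T @ X / len(X)
            components = []
            for _ in range(n_components):
                w = np.random.default_rng(8).standard_normal(cov.shape[0])
                for _ in range(n_iter):                       # power iteration
                    w = cov @ w
                    w /= np.linalg.norm(w)
                components.append(w)
                cov -= np.outer(w, w) * (w @ cov @ w)         # deflate extracted direction
            return np.array(components)

        X = np.random.default_rng(9).standard_normal((1000, 5)) @ np.diag([3, 2, 1, .5, .1])
        print(sequential_pca(X, 2).round(2))                  # approx. first two axes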

  14. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    PubMed

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization.

  15. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    NASA Astrophysics Data System (ADS)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín.

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to a BIRADS score and the mode of their evaluations was considered as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44) which suggests the method may be used for BI-RADS classification but a better training is required.
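
    A sketch of the classification rule described above: one PCA space per BI-RADS category is fitted to training histograms, and a new histogram is assigned to the class whose space reconstructs it best. The histograms are simulated and the class structure is invented.

        import numpy as np
        from sklearn.decomposition import PCA

        def fit_class_spaces(hists_by_class, n_components=5):
            return {c: PCA(n_components=n_components).fit(h)
                    for c, h in hists_by_class.items()}

        def classify(hist, spaces):
            errors = {}
            for c, pca in spaces.items():
                recon = pca.inverse_transform(pca.transform(hist[None, :]))
                errors[c] = np.linalg.norm(recon - hist)      # reconstruction error
            return min(errors, key=errors.get)

        rng = np.random.default_rng(10)
        # 22 training histograms (256 bins) per simulated BI-RADS class 1-4.
        train = {c: rng.dirichlet(np.full(256, 1 + 3 * c), size=22) for c in range(1, 5)}
        spaces = fit_class_spaces(train)
        test_hist = rng.dirichlet(np.full(256, 4))
        print("assigned BI-RADS category:", classify(test_hist, spaces))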

  16. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market for West Texas Intermediate (WTI) has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
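
    Entries 16 and 18 index the same paper. A compact sketch of the WMLR idea follows, with the PSO parameter search omitted; a stationary wavelet transform stands in for the Mallat decomposition, and the price series is simulated rather than WTI data.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(11)
        price = np.cumsum(rng.standard_normal(512)) + 50      # toy price series

        # Stationary wavelet transform keeps subseries aligned with the time axis.
        subseries = [c for pair in pywt.swt(price, "db2", level=3) for c in pair]
        lags = 5
        X = np.column_stack([s[i:len(price) - lags + i]       # lagged subseries features
                             for s in subseries for i in range(lags)])
        y = price[lags:]

        features = PCA(n_components=8).fit_transform(X)       # decorrelate features
        model = LinearRegression().fit(features, y)
        print("in-sample R^2:", model.score(features, y))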

  17. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is well suited to extracting the FECG. Most previous ICA-R algorithms focused only on how to optimize the cost function of the ICA-R and paid little attention to improving the cost function itself. They did not fully take advantage of the prior information about the desired signal to improve the ICA-R. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussianity measurement function and then construct a new cost function by directly using a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve computing the difference between the function of a Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it has lower computational complexity. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
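
    A hedged sketch of one-unit ICA guided by a reference: the mixtures are whitened, the unmixing vector is initialized from the reference signal, and one-unit FastICA iterations then maximize non-Gaussianity. This simplification differs from the paper, which folds the reference into the cost function itself; the signals below are toys.

        import numpy as np

        def extract_with_reference(X, ref, n_iter=100):
            X = X - X.mean(axis=1, keepdims=True)
            d, E = np.linalg.eigh(np.cov(X))                  # whitening
            Z = E @ np.diag(d ** -0.5) @ E.T @ X
            w = Z @ ref                                       # initialize from the reference
            w /= np.linalg.norm(w)
            g, dg = np.tanh, lambda u: 1 - np.tanh(u) ** 2    # non-Gaussianity contrast
            for _ in range(n_iter):                           # one-unit FastICA updates
                wx = w @ Z
                w_new = (Z * g(wx)).mean(axis=1) - dg(wx).mean() * w
                w = w_new / np.linalg.norm(w_new)
            return w @ Z

        rng = np.random.default_rng(12)
        t = np.linspace(0, 1, 2000)
        fecg = np.sign(np.sin(2 * np.pi * 25 * t)) * np.exp(-((t % 0.04) / 0.01) ** 2)
        mecg = np.sin(2 * np.pi * 8 * t)
        X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ np.vstack([mecg, fecg])  # two abdominal leads
        ref = np.sign(np.sin(2 * np.pi * 25 * t))             # crude FECG reference
        estimate = extract_with_reference(X, ref)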

  18. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market for West Texas Intermediate (WTI) has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  19. The study of landscape stability in Yuli County by principal component analysis method based on RS and GIS technology

    NASA Astrophysics Data System (ADS)

    Wang, Qianfeng; Zhou, Kefa; Sun, Li; Chen, Limou; Ou, Yang; Li, Guangyu; Qin, Yanfang; Wang, Jinlin

    2011-02-01

    In order to quantitatively evaluate the landscape stability of arid areas, a study area was selected in Yuli County, in the middle and lower reaches of the Tarim River. Remote sensing images were the main data source and were processed with the support of RS and GIS technology. The study extracted 11 landscape stability indices with the FRAGSTATS software, standardized the index data using the Z-score method, and then constructed a comprehensive evaluation model of landscape stability by the principal component analysis method. The results showed that the range of the comprehensive evaluation scores of Yuli's ecological landscape stability is 1.736, which indicates great variation in the ecological landscape stability of the study area. Stability declines in the following order: forest land, water area, grassland, cultivated land, built-up land, unused land. Landscape stability remains a key scientific issue that should be addressed urgently, and its study has important theoretical and practical significance.
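
    A short sketch of the comprehensive evaluation model, assuming Z-score standardization followed by a variance-weighted combination of principal component scores; the 6 x 11 index matrix is fabricated for illustration.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(13)
        indices = rng.standard_normal((6, 11))                # rows: landscape types

        z = StandardScaler().fit_transform(indices)           # Z-score standardization
        pca = PCA(n_components=3).fit(z)
        # Composite score: component scores weighted by explained variance.
        scores = pca.transform(z) @ pca.explained_variance_ratio_
        for name, s in zip(["forest", "water", "grass", "cultivated",
                            "built-up", "unused"], scores):
            print(f"{name:>10s}: stability score {s:+.3f}")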

  20. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To address the multicollinearity issue and the unequal contributions of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junctions, connectivity density, segment number, and vascular length, indicating that these are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both ABTs applied directly to the nine vascular parameters and with Pearson correlation, and the results were consistent. In contrast to ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters are assembled into the sparse PCs with the highest relative importance.
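
    A loose sketch of the two-stage analysis: vascular parameters are grouped into sparse principal components, and boosted trees then rank the components' importance for predicting vascular volume. scikit-learn's gradient boosting stands in for aggregated boosted trees, and the parameter matrix is simulated.

        import numpy as np
        from sklearn.decomposition import SparsePCA
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(14)
        params = rng.standard_normal((60, 9))                 # 9 vascular parameters
        volume = 2 * params[:, 0] + params[:, 3] + 0.2 * rng.standard_normal(60)

        spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
        pcs = spca.fit_transform(params)                      # sparse principal components

        abt = GradientBoostingRegressor(random_state=0).fit(pcs, volume)
        print("PC importances:", abt.feature_importances_.round(3))
        print("loadings of top PC:", spca.components_[abt.feature_importances_.argmax()].round(2))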

  1. Skill Components of Task Analysis

    ERIC Educational Resources Information Center

    Adams, Anne E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool of many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task…

  2. The Components of Microbiological Risk Analysis

    PubMed Central

    Bentley, Stefano; Giacometti, Federica; Serraino, Andrea

    2015-01-01

    The paper describes the process of risk analysis from a food safety perspective. The steps of risk analysis, defined as a process consisting of three interconnected components (risk assessment, risk management, and risk communication), are analysed. The different components of risk assessment, risk management, and risk communication are further described. PMID:27800384

  3. A reproducible analytical system based on the multi-component analysis of triterpene acids in Ganoderma lucidum.

    PubMed

    Da, Juan; Cheng, Chun-Ru; Yao, Shuai; Long, Hua-Li; Wang, Yan-Hong; Khan, Ikhlas A; Li, Yi-Feng; Wang, Qiu-Rong; Cai, Lu-Ying; Jiang, Bao-Hong; Liu, Xuan; Wu, Wan-Ying; Guo, De-An

    2015-06-01

    Ultra-performance liquid chromatography (UPLC) and the Single Standard for Determination of Multi-Components (SSDMC) approach are becoming increasingly important for the quality control of medicinal herbs; such an approach was developed here for Ganoderma lucidum. Special attention was necessary for the appropriate selection of markers, for determining the reproducibility of the relative retention times (RRT), and for the accuracy of the conversion factors (F). Finally, ten components were determined, with ganoderic acid A serving as the single standard. Stable system parameters were established, and with those issues successfully resolved, this analytical method could be used more broadly.

  4. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589
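
    A sketch of the growth-model step mentioned above: a logistic curve is fitted to image-derived biomass estimates, yielding interpretable parameters (capacity, rate, inflection time). The data points below are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            return K / (1 + np.exp(-r * (t - t0)))

        days = np.arange(0, 40, 2, dtype=float)
        biomass = logistic(days, 120, 0.25, 20) \
                  + 3 * np.random.default_rng(15).standard_normal(days.size)

        (K, r, t0), _ = curve_fit(logistic, days, biomass, p0=[100, 0.2, 15])
        print(f"capacity K={K:.1f}, growth rate r={r:.3f}, inflection day t0={t0:.1f}")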

  5. Finite Element Based Stress Analysis of Graphite Component in High Temperature Gas Cooled Reactor Core Using Linear and Nonlinear Irradiation Creep Models

    SciTech Connect

    Mohanty, Subhasish; Majumdar, Saurindranath

    2015-01-01

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  6. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory of cost-equivalent realizations.

  7. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  8. Components of Task-Based Needs Analysis of the ESP Learners with the Specialization of Business and Tourism

    ERIC Educational Resources Information Center

    Poghosyan, Naira

    2016-01-01

    In the following paper we shall thoroughly analyze the target learning needs of the learners within an ESP (English for Specific Purposes) context. The main concerns of ESP have always been and remain with the needs analysis, text analysis and preparing learners to communicate effectively in the tasks prescribed by their study or work situation.…

  9. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  10. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty from 3D human face data. Many authors have proposed the principal component analysis (PCA) method, which permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest-ranked contestants in the 2009 competition. PCA was conducted on this sample of real faces to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation; therefore, the hypothesis is not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness, but are undoubtedly significantly impacted by it, the authors conclude, based on their experience and these real faces, that PCA is not a valid prediction tool for attractiveness. The database of the features belonging to the analyzed sample is downloadable online, and further contributions are welcome. PMID:24728666

  11. Component fragilities. Data collection, analysis and interpretation

    SciTech Connect

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment, with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high-level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators, and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturer and model. For some devices, testing even at the shake-table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high-level test data exists.

  12. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study.

    PubMed

    Kreuzer, Stefan W; Pourmoghaddam, Amir; Leffers, Kevin J; Johnson, Clint W; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force-sensing-based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on the transepicondylar axis, the anterior-posterior axis, or the utilization of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and the other methods (P < 0.05). Soft tissue balancing resulted in a smaller deviation from the anatomical epicondylar axis than any other method. 77% of operated knees were within a range of ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation. PMID:26881086

  13. PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Wang, Weichen

    2016-01-01

    This paper introduces Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by the additive model via sieve approximations. By using the newly proposed Projected-PCA, the rates of convergence of the smooth factor loading matrices are obtained, which are much faster than those of conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests of whether observed covariates have explanatory power over the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374

  14. Simultaneous analysis of 11 main active components in Cirsium setosum based on HPLC-ESI-MS/MS and combined with statistical methods.

    PubMed

    Sun, Qian; Chang, Lu; Ren, Yanping; Cao, Liang; Sun, Yingguang; Du, Yingfeng; Shi, Xiaowei; Wang, Qiao; Zhang, Lantong

    2012-11-01

    A novel method based on high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry was developed for the simultaneous determination of 11 major active components, including ten flavonoids and one phenolic acid, in Cirsium setosum. Separation was performed on a reversed-phase C(18) column with gradient elution of methanol and 0.1‰ acetic acid (v/v). The identification and quantification of the analytes were achieved on a hybrid quadrupole linear ion trap mass spectrometer. Multiple-reaction monitoring scanning was employed for quantification, with the electrospray ion source polarity switched between positive and negative modes in a single run. Full validation of the assay was carried out, including linearity, precision, accuracy, stability, and limits of detection and quantification. The results demonstrated that the method developed was reliable, rapid, and specific. Twenty-five batches of C. setosum samples from different sources were determined for the first time using the developed method, and the total contents of the 11 analytes ranged from 1717.460 to 23028.258 μg/g. Among them, the content of linarin was highest, with a mean value of 7340.967 μg/g. Principal component analysis and hierarchical clustering analysis were performed to differentiate and classify the samples, which is helpful for comprehensive evaluation of the quality of C. setosum.

  15. Principal component analysis for the forensic discrimination of black inkjet inks based on the Vis-NIR fibre optics reflection spectra.

    PubMed

    Gál, Lukáš; Oravec, Michal; Gemeiner, Pavol; Čeppan, Michal

    2015-12-01

    Nineteen black inkjet inks of six different brands were examined by fibre optics reflection spectroscopy in the visible and near-infrared region (Vis-NIR FORS) directly on paper, with a view to achieving good resolution between them. These inks were tested on nineteen different inkjet printers from three brands. Samples were obtained from prints with a reflection probe. Processed reflection spectra in the range 500-1000 nm were used as samples in principal component analysis. Variability between spectra of the same ink obtained from different prints, as well as between spectra of square areas and lines, was examined. Reference Principal Component Analysis (PCA) models were created for spectra obtained from both square areas and lines. According to these models, the inkjet inks were divided into clusters. The PCA method is able to separate inks containing carbon black as the main colorant from inks using other colorants. Some spectra were recorded from prints made on another printer and used as validation samples. Spectra of the validation samples were projected onto the reference PCA models. According to the position of the validation samples in the score plots, it can be concluded that PCA based on Vis-NIR FORS can reliably differentiate inkjet inks that are included in the reference database. The presented method appears to be a suitable tool for the forensic examination of questioned documents containing inkjet inks. Inkjet ink spectra were obtained without extraction or cutting of the sample, with the possibility of measuring outside the laboratory.
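
    As a rough illustration of the projection step described in this record, the sketch below builds a reference PCA model from processed spectra and projects validation spectra onto it; the array shapes, random data, and nearest-neighbour assignment are stand-ins, not the authors' procedure.

```python
# Minimal sketch: reference PCA model plus projection of validation spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ref_spectra = rng.random((19, 251))   # 19 inks x 251 points (500-1000 nm), stand-in data
val_spectra = rng.random((5, 251))    # spectra recorded from prints on another printer

pca = PCA(n_components=3)
ref_scores = pca.fit_transform(ref_spectra)   # coordinates in the reference score plot
val_scores = pca.transform(val_spectra)       # validation samples projected onto the model

# Assign each validation sample to the nearest reference ink in score space.
dist = np.linalg.norm(ref_scores[None, :, :] - val_scores[:, None, :], axis=2)
print(dist.argmin(axis=1))
```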

  16. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.

  17. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  18. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages will accelerate the development of new computations and encourage cooperation among scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to interact dynamically with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, but this requires more than componentizing the functions or subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; we delineate the structure of GAMESS computations, followed by our approaches to its component development. Then we use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To justify the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  19. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
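
    The Sylvester step at the heart of the reported speed-up can be reproduced in miniature with SciPy's general-purpose solver; the matrices below are random stand-ins with no connection to actual PEB diffusion operators, so this only shows the form of the equation being solved.

```python
# Sketch: solve the Sylvester equation A X + X B = Q with SciPy.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
A = rng.random((6, 6)) + 6 * np.eye(6)   # stand-in operator matrices
B = rng.random((6, 6)) + 6 * np.eye(6)
Q = rng.random((6, 6))

X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))     # verify the residual vanishes
```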

  20. Principal component analysis based interconversion between infrared and near-infrared spectra for the study of thermal-induced weak interaction changes of poly(N-isopropylacrylamide).

    PubMed

    Zhang, Liping; Noda, Isao; Wu, Yuqing

    2009-06-01

    The use of a novel spectral interconversion scheme, principal component analysis (PCA) based spectral prediction, to probe weak molecular interactions in a polymer film is reported. A PCA model is built on a joint data matrix formed by concatenating two related spectral data matrices (such as infrared (IR) and near-infrared (NIR) spectra) along the variable direction; the loading matrix of the model is then split into two parts to predict the desired spectra. For a better PCA-based prediction, it is suggested that the samples whose spectra are to be predicted should be as similar as possible to those used in the model. Based on the PCA model, the thermally induced changes in the weak interactions of a poly(N-isopropylacrylamide) (PNiPA) film are revealed by interconversion between selected spectral ranges measured between 40 and 220 degrees C. The thermally induced weak interaction changes of PNiPA, expressed as either band shifts or intensity changes in a specific region, have been probed properly. Meanwhile, the robustness of the spectral prediction is compared in detail with that achieved by a partial least squares (PLS2) model, illustrating its advantages in predicting more subtle structural changes such as those of C-H groups.
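
    A minimal sketch of the joint-matrix scheme under stated assumptions: PCA is fitted to the concatenated IR/NIR matrix, the loadings are split into an IR block and an NIR block, scores for a new sample are estimated from its IR spectrum by least squares, and the NIR block reconstructs the predicted spectrum. All dimensions and data are hypothetical.

```python
# Sketch: PCA-based interconversion from IR to NIR spectra via split loadings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n, p_ir, p_nir = 30, 400, 300
ir, nir = rng.random((n, p_ir)), rng.random((n, p_nir))

joint = np.hstack([ir, nir])              # concatenate along the variable direction
pca = PCA(n_components=5).fit(joint)
L_ir = pca.components_[:, :p_ir]          # loading block for the IR variables
L_nir = pca.components_[:, p_ir:]         # loading block for the NIR variables

# Predict the NIR spectrum of a new sample from its IR spectrum.
ir_new = rng.random((1, p_ir))
scores = (ir_new - pca.mean_[:p_ir]) @ np.linalg.pinv(L_ir)  # least-squares scores
nir_pred = scores @ L_nir + pca.mean_[p_ir:]                 # reconstructed NIR block
```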

  1. Identification of Tea Storage Times by Linear Discrimination Analysis and Back-Propagation Neural Network Techniques Based on the Eigenvalues of Principal Components Analysis of E-Nose Sensor Signals

    PubMed Central

    Yu, Huichun; Wang, Yongwei; Wang, Jun

    2009-01-01

    An electronic nose (E-nose) was employed to detect the aroma of green tea after different storage times. Longjing green tea dry leaves, beverages and residues were detected with an E-nose, respectively. In order to decrease the data dimensionality and optimize the feature vector, the E-nose sensor response data were analyzed by principal components analysis (PCA) and the five main principal components values were extracted as the input for the discrimination analysis. The storage time (0, 60, 120, 180 and 240 days) was better discriminated by linear discrimination analysis (LDA) and was predicted by the back-propagation neural network (BPNN) method. The results showed that the discrimination and testing results based on the tea leaves were better than those based on tea beverages and tea residues. The mean errors of the tea leaf data were 9, 2.73, 3.93, 6.33 and 6.8 days, respectively. PMID:22408494
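
    The PCA-then-LDA pipeline this record describes maps directly onto scikit-learn; the sensor matrix and storage-time labels below are synthetic stand-ins for the E-nose data, with five components kept as in the paper.

```python
# Sketch: five principal components of sensor responses feeding an LDA classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.random((100, 10))            # E-nose sensor responses (stand-in)
y = rng.integers(0, 5, 100)          # storage-time classes: 0/60/120/180/240 days

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.score(X, y))             # training accuracy on the synthetic data
```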

  2. Selection of principal components based on Fisher discriminant ratio

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyan; Naghedolfeizi, Masoud; Arora, Sanjeev; Yousif, Nabil; Aberra, Dawit

    2016-05-01

    Principal component analysis transforms a set of possibly correlated variables into uncorrelated variables, and is widely used as a technique of dimensionality reduction and feature extraction. In some applications of dimensionality reduction, the objective is to use a small number of principal components to represent most variation in the data. On the other hand, the main purpose of feature extraction is to facilitate subsequent pattern recognition and machine learning tasks, such as classification. Selecting principal components for classification tasks aims for more than dimensionality reduction. The capability of distinguishing different classes is another major concern. Components that have larger eigenvalues do not necessarily have better distinguishing capabilities. In this paper, we investigate a strategy of selecting principal components based on the Fisher discriminant ratio. The ratio of between class variance to within class variance is calculated for each component, based on which the principal components are selected. The number of relevant components is determined by the classification accuracy. To alleviate overfitting which is common when there are few training data available, we use a cross-validation procedure to determine the number of principal components. The main objective is to select the components that have large Fisher discriminant ratios so that adequate class separability is obtained. The number of selected components is determined by the classification accuracy of the validation data. The selection method is evaluated by face recognition experiments.
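
    A compact sketch of the selection strategy, assuming the Fisher discriminant ratio of each component is computed as between-class over within-class variance of its scores; the data and the number of retained components are arbitrary.

```python
# Sketch: rank principal components by Fisher discriminant ratio.
import numpy as np
from sklearn.decomposition import PCA

def fisher_ratios(scores, y):
    """Between-class over within-class variance for each score column."""
    classes = np.unique(y)
    grand = scores.mean(axis=0)
    between = sum((y == c).sum() * (scores[y == c].mean(0) - grand) ** 2 for c in classes)
    within = sum(((scores[y == c] - scores[y == c].mean(0)) ** 2).sum(0) for c in classes)
    return between / within

rng = np.random.default_rng(4)
X, y = rng.random((60, 20)), rng.integers(0, 2, 60)
scores = PCA(n_components=10).fit_transform(X)
ratios = fisher_ratios(scores, y)
selected = np.argsort(ratios)[::-1][:4]   # keep the components with the largest ratios
print(selected)
```

    In the paper, the number of retained components is chosen by cross-validated classification accuracy rather than by the fixed cut-off used here.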

  3. Principal component analysis implementation in Java

    NASA Astrophysics Data System (ADS)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented using the Java programming language. We consider using the PCA algorithm especially for analysing data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform that is supported by Java.

  4. Principal component analysis of phenolic acid spectra

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  5. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  6. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest, healthy breast tissue, and damaged tissue as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Because Compton scattering is absent, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics than the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004

  7. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  8. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  9. Two stage principal component analysis of color.

    PubMed

    Lenz, Reiner

    2002-01-01

    We introduce a two-stage analysis of color spectra. In the first processing stage, correlation with the first eigenvector of a spectral database is used to measure the intensity of a color spectrum. In the second step, a perspective projection is used to map the color spectrum to the hyperspace of spectra with first eigenvector coefficient equal to unity. The location in this hyperspace describes the chromaticity of the color spectrum. In this new projection space, a second basis of eigenvectors is computed and the projected spectrum is described by the expansion in this chromaticity basis. This description is possible since the space of color spectra is conical. We compare this two-stage process with traditional principal component analysis and find that the results of the new structure are closer to the structure of traditional chromaticity descriptors than traditional principal component analysis.
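
    The two-stage structure can be imitated in a few lines: the first eigenvector of the spectral database measures intensity, a perspective projection divides out the first coefficient, and a second eigenbasis describes chromaticity. The spectra below are random nonnegative stand-ins, and eigenvector signs are left unfixed, which a real implementation would need to handle.

```python
# Sketch: two-stage PCA of color spectra (intensity, then chromaticity).
import numpy as np

rng = np.random.default_rng(5)
spectra = rng.random((200, 31))                  # nonnegative color spectra (stand-in)

# Stage 1: the first eigenvector of the database measures intensity.
_, _, Vt = np.linalg.svd(spectra, full_matrices=False)
intensity = spectra @ Vt[0]

# Stage 2: perspective projection onto the hyperspace with first coefficient = 1.
coeffs = spectra @ Vt.T                          # expansion in the eigenvector basis
chroma = coeffs[:, 1:] / coeffs[:, [0]]          # divide out the first coefficient
centered = chroma - chroma.mean(axis=0)
_, _, Ct = np.linalg.svd(centered, full_matrices=False)
chroma_scores = centered @ Ct.T                  # description in the chromaticity basis
```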

  10. Principal Component Analysis With Sparse Fused Loadings

    PubMed Central

    Guo, Jian; James, Gareth; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    In this article, we propose a new method for principal component analysis (PCA), whose main objective is to capture natural “blocking” structures in the variables. Further, the method, beyond selecting different variables for different components, also encourages the loadings of highly correlated variables to have the same magnitude. These two features often help in interpreting the principal components. To achieve these goals, a fusion penalty is introduced and the resulting optimization problem solved by an alternating block optimization algorithm. The method is applied to a number of simulated and real datasets and it is shown that it achieves the stated objectives. The supplemental materials for this article are available online. PMID:25878487

  11. PCA: Principal Component Analysis for spectra modeling

    NASA Astrophysics Data System (ADS)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices about the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra of the four classifications used to classify objects are made available with the code.

  12. ARTICLES: Laser spectrochromatographic analysis of petroleum components

    NASA Astrophysics Data System (ADS)

    Korobeĭnik, G. S.; Letokhov, V. S.; Montanari, S. G.; Tumanova, L. M.

    1985-01-01

    A system combining a gas chromatograph and a laser optoacoustic spectrometer (with a CO2 laser and means for fast frequency scanning) was used to investigate model hydrocarbon mixtures, as well as some real objects in the form of benzine fractions of petroleum oil. The fast scanning regime was used to record optoacoustic spectra of hydrocarbons (in the range 9.2-10.8 μm) during the travel time (1-10 s) of the individual components of a mixture through an optoacoustic cell in the course of chromatographic separation of these components. The spectra were used to carry out a group hydrocarbon analysis of benzine fractions of petroleum oil from various locations. The proposed method was relatively fast and was characterized by a good ability to identify the various components, compared with commonly employed methods such as gas-liquid capillary chromatography.

  13. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  14. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  15. [Assessment of aquatic ecosystem health based on principal component analysis with entropy weight: a case study of Wanning Reservoir (Hainan Island, China)].

    PubMed

    Xie, Fei; Gu, Ji-Guang; Lin, Zhang-Wen

    2014-06-01

    A new assessment method for ecosystem health, based on principal component analysis (PCA) with entropy weighting, was applied to Wanning Reservoir, Hainan Island, China to investigate whether it could resolve the overlap in weighting that exists in the traditional entropy weight-based method. The results showed that the ecosystem health status of Wanning Reservoir improved overall from 2010 to 2012; the annual means of the ecosystem health comprehensive index (EHCI) were 0.534, 0.617, and 0.634 for 2010, 2011, and 2012, respectively, corresponding to health statuses of III (medium), II (good), and II (good). In addition, the ecosystem health status of the reservoir displayed a weak seasonal variation. The variation of EHCI became smaller in recent years, showing that Wanning Reservoir tended to be relatively stable. Comparison of the index weights under the new and traditional methods indicated that the cumulative weight of the four correlated indices (i.e., DO, COD, BOD, and NH(4+)-N) was 0.382 under the traditional method but only 0.178 under the new one, suggesting that applying PCA with entropy weighting can effectively avoid the overlap in weighting. In addition, correlation analysis between the trophic status index and EHCI showed a significant negative correlation (P < 0.05), indicating that the new method could improve not only the assignment of weights but also the accuracy of the results. The new method is thus suitable for evaluating the ecosystem health of reservoirs.
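
    The entropy-weighting step can be sketched on its own, assuming a positive indicator matrix in which larger values are better and leaving out the PCA stage the paper combines it with; the data and the crude index at the end are illustrative only.

```python
# Sketch: entropy weights for water-quality indicators.
import numpy as np

def entropy_weights(X):
    """Entropy weights for an (n_samples, n_indicators) positive matrix."""
    P = X / X.sum(axis=0)                               # column-wise proportions
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy of each indicator
    d = 1.0 - e                                         # degree of diversification
    return d / d.sum()

rng = np.random.default_rng(6)
X = rng.random((36, 8)) + 0.01                 # monthly indicator values (stand-in)
w = entropy_weights(X)
ehci = (X / X.max(axis=0)) @ w                 # a crude weighted health index
print(w.round(3))
```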

  16. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electrocardiographic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. We address this with a proposed adaptive rectification pre-processing that linearizes the ECG preamplifier, after which linear ICA is applied iteratively until the outputs reach their own stable kurtosis. We call this new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of a heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  17. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public-domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
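
    The three-parameter estimation step maps naturally onto SciPy's Weibull fit; the failure data below are simulated and the stress units arbitrary, so this only illustrates the shape/location/scale estimation the record mentions.

```python
# Sketch: maximum-likelihood fit of a three-parameter Weibull to failure data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
strengths = stats.weibull_min.rvs(c=8.0, loc=100.0, scale=250.0,
                                  size=50, random_state=rng)   # simulated strengths

shape, loc, scale = stats.weibull_min.fit(strengths)           # the three parameters
# Estimated probability that matrix cracking initiates by a given stress level:
print(stats.weibull_min.cdf(300.0, shape, loc, scale))
```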

  18. Principal components analysis of Jupiter VIMS spectra

    USGS Publications Warehouse

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work done on the Great Red Spot using NIMS data. Some other new findings, presently under investigation, are presented. © 2004 Published by Elsevier Ltd on behalf of COSPAR.

  19. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  20. Spectral Components Analysis of Diffuse Emission Processes

    SciTech Connect

    Malyshev, Dmitry; /KIPAC, Menlo Park

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with their energy spectra. Most existing methods use some information about the spatial distribution of the components, e.g., closeness to an external template, independence of components, etc., in order to separate them. In this paper we propose a method that puts conditions on the spectra only. The advantages of our method are: 1) it is internal: the maps of the components are constructed as combinations of data in different energy bins; 2) the components may be correlated among each other; 3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an Appendix, we present a generalization of the method, in which one can also add a number of external templates.

  1. WE-G-18C-09: Separating Perfusion and Diffusion Components From Diffusion Weighted MRI of Rectum Tumors Based On Intravoxel Incoherent Motion (IVIM) Analysis

    SciTech Connect

    Tyagi, N; Wengler, K; Mazaheri, Y; Hunt, M; Deasy, J; Gollub, M

    2014-06-15

    Purpose: Pseudodiffusion arises from the microcirculation of blood in the randomly oriented capillary network and contributes to the signal decay acquired using a multi-b-value diffusion weighted (DW)-MRI sequence. This effect is more significant at low b-values and should be properly accounted for in apparent diffusion coefficient (ADC) calculations. The purpose of this study was to separate the perfusion and diffusion components based on a biexponential and a segmented monoexponential model using IVIM analysis. Methods: The signal attenuation is modeled as S(b) = S0[(1−f)exp(−bD) + f exp(−bD*)]. Fitting the biexponential decay leads to the quantification of D, the true diffusion coefficient; D*, the pseudodiffusion coefficient; and f, the perfusion fraction. A nonlinear least squares fit and two segmented monoexponential models were used to derive the values of D, D*, and f. In the segmented approach, b = 200 s/mm^2 was used as the cut-off value for the calculation of D. DW-MRIs of a rectal cancer patient, acquired before chemotherapy, before radiation therapy (RT), and 4 weeks into RT, were investigated as an example case. Results: The mean ADC for the tumor drawn on the DW images was 0.93, 1.0, and 1.13 × 10^-3 mm^2/s before chemotherapy, before RT, and 4 weeks into RT, respectively. The mean (D × 10^-3 mm^2/s, D* × 10^-3 mm^2/s, f %) based on the biexponential fit was (0.67, 18.6, 27.2%), (0.72, 17.7, 28.9%), and (0.83, 15.1, 30.7%) at these time points. The mean (D, D*, f) based on the segmented fit was (0.72, 10.5, 12.1%), (0.72, 8.2, 17.4%), and (0.82, 8.1, 16.5%). Conclusion: ADC values are typically higher than true diffusion coefficients. For tumors with a significant perfusion effect, ADC should be analyzed at higher b-values or separated from the perfusion component. The biexponential fit overestimates the perfusion fraction because of increased sensitivity to noise at low b-values.
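
    A sketch of both fitting routes under the stated signal model, using synthetic S/S0 values with roughly the magnitudes reported; the b-values, noise level, starting values, and bounds are assumptions.

```python
# Sketch: IVIM biexponential fit and segmented monoexponential fit.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 20, 50, 80, 150, 200, 400, 600, 800], float)  # s/mm^2 (assumed)

def ivim(b, f, D, Dstar):
    """Normalized IVIM signal: (1-f)exp(-bD) + f exp(-bD*)."""
    return (1 - f) * np.exp(-b * D) + f * np.exp(-b * Dstar)

rng = np.random.default_rng(8)
sig = ivim(b, 0.28, 0.7e-3, 17e-3) + rng.normal(0, 0.005, b.size)

# Full biexponential fit for (f, D, D*).
(f, D, Dstar), _ = curve_fit(ivim, b, sig, p0=[0.1, 1e-3, 10e-3],
                             bounds=([0, 0, 1e-3], [1, 5e-3, 0.1]))

# Segmented fit: monoexponential over b >= 200 s/mm^2 gives D; the intercept gives f.
hi = b >= 200
slope, intercept = np.polyfit(b[hi], np.log(sig[hi]), 1)
D_seg, f_seg = -slope, 1 - np.exp(intercept)
print(f, D, Dstar, D_seg, f_seg)
```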

  2. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  3. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  4. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  5. Automated resolution of chromatographic signals by independent component analysis-orthogonal signal deconvolution in comprehensive gas chromatography/mass spectrometry-based metabolomics.

    PubMed

    Domingo-Almenara, Xavier; Perera, Alexandre; Ramírez, Noelia; Brezmes, Jesus

    2016-07-01

    Comprehensive gas chromatography-mass spectrometry (GC×GC-MS) provides a different perspective in metabolomics profiling of samples. However, algorithms for GC×GC-MS data processing are needed in order to automatically process the data and extract the purest information about the compounds appearing in complex biological samples. This study shows the capability of independent component analysis-orthogonal signal deconvolution (ICA-OSD), an algorithm based on blind source separation and distributed in an R package called osd, to extract the spectra of the compounds appearing in GC×GC-MS chromatograms in an automated manner. We studied the performance of ICA-OSD by the quantification of 38 metabolites through a set of 20 Jurkat cell samples analyzed by GC×GC-MS. The quantification by ICA-OSD was compared with a supervised quantification by selective ions, and most of the R² coefficients of determination were in good agreement (R² > 0.90), while up to 24 cases exhibited an excellent linear relation (R² > 0.95). We concluded that ICA-OSD can be used to resolve co-eluted compounds in GC×GC-MS. PMID:27208528
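
    A generic blind-source-separation sketch in the spirit of this record, using scikit-learn's FastICA rather than the authors' osd package: two co-eluting synthetic elution profiles are unmixed from a time × m/z data matrix. Peak shapes and spectra are invented.

```python
# Sketch: unmixing co-eluting chromatographic peaks with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 500)
profiles = np.vstack([np.exp(-((t - 0.45) / 0.05) ** 2),     # compound 1 elution
                      np.exp(-((t - 0.55) / 0.05) ** 2)])    # compound 2, co-eluting
spectra = np.random.default_rng(9).random((2, 80))           # stand-in mass spectra
X = profiles.T @ spectra                                     # observed (time, m/z) data

ica = FastICA(n_components=2, random_state=0)
est_profiles = ica.fit_transform(X)   # estimated elution profiles (up to scale/sign)
est_spectra = ica.mixing_.T           # estimated component spectra
```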

  6. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

    This paper presents a pipelined VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency, high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse, and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipelined VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution in the 8-channel EEG system.
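
    What the processor computes can be mimicked numerically: from the SVD of a symmetric positive-definite matrix, the inverse and inverse square root follow directly from the singular values. The matrix below is a random stand-in for whatever ORICA actually whitens.

```python
# Sketch: inverse and inverse-square-root matrices via SVD.
import numpy as np

rng = np.random.default_rng(10)
C = rng.random((8, 8))
C = C @ C.T + 8 * np.eye(8)              # symmetric positive-definite stand-in

U, s, Vt = np.linalg.svd(C)
C_inv = Vt.T @ np.diag(1 / s) @ U.T                 # inverse from the SVD factors
C_inv_sqrt = Vt.T @ np.diag(1 / np.sqrt(s)) @ U.T   # inverse square root (whitening)

print(np.allclose(C_inv @ C, np.eye(8)))
print(np.allclose(C_inv_sqrt @ C @ C_inv_sqrt, np.eye(8)))
```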

  7. Automated resolution of chromatographic signals by independent component analysis-orthogonal signal deconvolution in comprehensive gas chromatography/mass spectrometry-based metabolomics.

    PubMed

    Domingo-Almenara, Xavier; Perera, Alexandre; Ramírez, Noelia; Brezmes, Jesus

    2016-07-01

    Comprehensive gas chromatography-mass spectrometry (GC×GC-MS) provides a different perspective in metabolomics profiling of samples. However, algorithms for GC×GC-MS data processing are needed in order to automatically process the data and extract the purest information about the compounds appearing in complex biological samples. This study shows the capability of independent component analysis-orthogonal signal deconvolution (ICA-OSD), an algorithm based on blind source separation and distributed in an R package called osd, to extract the spectra of the compounds appearing in GC×GC-MS chromatograms in an automated manner. We studied the performance of ICA-OSD by the quantification of 38 metabolites through a set of 20 Jurkat cell samples analyzed by GC×GC-MS. The quantification by ICA-OSD was compared with a supervised quantification by selective ions, and most of the R² coefficients of determination were in good agreement (R² > 0.90), while up to 24 cases exhibited an excellent linear relation (R² > 0.95). We concluded that ICA-OSD can be used to resolve co-eluted compounds in GC×GC-MS.

  8. Impact of parameter fluctuations on the performance of ethanol precipitation in production of Re Du Ning Injections, based on HPLC fingerprints and principal component analysis.

    PubMed

    Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei

    2016-01-01

    The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one by one, and then examined by HPLC fingerprint analyses. Unlike the traditional evaluation model based on a single constituent or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the most important parameters influencing the recovery of active compounds in the precipitation resultants. The present study provides a reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and can help pharmaceutical enterprises adopt a scientific, reasonable, and cost-effective approach to ensuring batch-to-batch quality consistency of the final products.

  9. Structural analysis methods development for turbine hot section components

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1989-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine HOt Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components made of isotropic and anisotropic nickel-base superalloys are presented.

  10. Practical Issues in Component Aging Analysis

    SciTech Connect

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair-same-as-new, leading to a renewal process, and repair-same-as-old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process. The common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
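
    One common way to judge between the two repair assumptions is a trend test on the cumulative failure times; the sketch below uses the Laplace test (a standard choice, though not necessarily the one in the paper) on made-up data.

```python
# Sketch: Laplace trend test for aging in cumulative failure times.
import numpy as np

def laplace_trend(times, T):
    """Laplace statistic for failure times on (0, T); approximately N(0, 1)
    under a homogeneous Poisson process (no trend)."""
    times = np.asarray(times, float)
    n = len(times)
    return (times.mean() - T / 2) / (T * np.sqrt(1.0 / (12 * n)))

fail_times = [120, 310, 560, 700, 810, 890, 950]   # cumulative hours (stand-in data)
u = laplace_trend(fail_times, T=1000)
# |u| > 1.96 suggests a trend (repair-same-as-old / NHPP modelling);
# otherwise a renewal-process treatment of the times between failures may be defensible.
print(u)
```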

  11. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  12. Compound fault diagnosis of gearboxes based on GFT component extraction

    NASA Astrophysics Data System (ADS)

    Ou, Lu; Yu, Dejie

    2016-11-01

    Compound fault diagnosis of gearboxes is of great importance to the long-term safe operation of rotating machines, and the key is to separate the different fault components. In this paper, the path graph is introduced into vibration signal analysis and the graph Fourier transform (GFT) of vibration signals is investigated in the graph spectrum domain. To better extract the fault components in gearboxes, a new adjacency weight matrix is defined, and the GFTs of simulated signals of a gear and a bearing with localized faults are then analyzed. Further, since the GFT graph spectra of the gear fault component and the bearing fault component are mainly distributed in the low-order region and the high-order region, respectively, a novel method for the compound fault diagnosis of gearboxes based on GFT component extraction is proposed. In this method, the nonzero ratios, introduced as auxiliary quantities for analyzing the eigenvectors, and the GFT of a gearbox vibration signal are first calculated. Then, the order thresholds for the reconstructed fault components are determined and the fault components are extracted. Finally, Hilbert demodulation analyses are conducted. From the envelope spectra of the fault components, the faults of the gear and the bearing can be diagnosed respectively. The performance of the proposed method is validated on simulation data and on experimental signals from a gearbox with compound faults.
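    A small sketch of the graph Fourier transform on a path graph, using unit adjacency weights rather than the paper's specially defined weight matrix (Python; the signal and mode cutoff are illustrative):

        import numpy as np

        def path_graph_gft(signal):
            """GFT on a path graph: L = D - W with unit weights between
            consecutive samples; the eigenvectors of L, ordered by eigenvalue,
            form the graph Fourier basis (low order = low graph frequency)."""
            n = len(signal)
            W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
            L = np.diag(W.sum(axis=1)) - W
            eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
            return eigvals, eigvecs, eigvecs.T @ signal

        # Low-order coefficients capture smooth (gear-like) components;
        # high-order coefficients capture impulsive (bearing-like) content.
        x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
        _, U, c = path_graph_gft(x)
        x_low = U[:, :32] @ c[:32]                 # reconstruct first 32 modes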

  13. New approach based on fuzzy logic and principal component analysis for the classification of two-dimensional maps in health and disease. Application to lymphomas.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Righetti, Pier Giorgio; Antonucci, Francesca

    2003-07-01

    Two-dimensional (2D) electrophoresis is the most widespread technique for the separation of proteins in biological systems. This technique produces 2D maps of high complexity, which creates difficulties in the comparison of different samples. The method proposed in this paper for the comparison of different 2D maps can be summarised in four steps: (a) digitalisation of the image; (b) fuzzification of the digitalised map in order to account for the variability of the two-dimensional electrophoretic separation; (c) decoding by principal component analysis of the previously obtained fuzzy maps, in order to reduce the system dimensionality; (d) classification analysis (linear discriminant analysis), in order to separate the samples in the dataset according to the classes present. This method was applied to a dataset of eight samples: four belonging to healthy human lymph-nodes and four deriving from non-Hodgkin lymphomas. The amount of fuzzification of the original map is governed by the sigma parameter: the larger the value, the fuzzier the resulting transformed map. The effect of the fuzzification parameter was investigated, the optimal results being obtained for sigma = 1.75 and 2.25. Principal component analysis and linear discriminant analysis allowed the separation of the two classes of samples without any misclassification. PMID:12929957
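    A compact sketch of the four-step pipeline, approximating fuzzification with Gaussian smoothing (one plausible reading of that step; the sample count, map size, and labels are hypothetical):

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Hypothetical digitalised 2D maps: 8 samples of 64 x 64 spot images
        rng = np.random.default_rng(1)
        maps = rng.random((8, 64, 64))
        labels = [0, 0, 0, 0, 1, 1, 1, 1]          # healthy vs. lymphoma

        sigma = 1.75                               # fuzzification parameter
        fuzzy = np.array([gaussian_filter(m, sigma) for m in maps])

        # PCA on the flattened fuzzy maps reduces dimensionality, then LDA
        # separates the two classes in the reduced score space.
        scores = PCA(n_components=4).fit_transform(fuzzy.reshape(8, -1))
        lda = LinearDiscriminantAnalysis().fit(scores, labels)
        print(lda.score(scores, labels))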

  14. Recursive approach of EEG-segment-based principal component analysis substantially reduces cryogenic pump artifacts in simultaneous EEG-fMRI data.

    PubMed

    Kim, Hyun-Chul; Yoo, Seung-Schik; Lee, Jong-Hwan

    2015-01-01

    Electroencephalography (EEG) data simultaneously acquired with functional magnetic resonance imaging (fMRI) data are preprocessed to remove gradient artifacts (GAs) and ballistocardiographic artifacts (BCAs). Nonetheless, these data, especially in the gamma frequency range, can be contaminated by residual artifacts produced by mechanical vibrations in the MRI system, in particular the cryogenic pump that compresses and transports the helium that chills the magnet (the helium-pump). However, few options are available for the removal of helium-pump artifacts. In this study, we propose a recursive approach of EEG-segment-based principal component analysis (rsPCA) that enables the removal of these helium-pump artifacts. Using the rsPCA method, feature vectors representing helium-pump artifacts were successfully extracted as eigenvectors, and the reconstructed signals of the feature vectors were subsequently removed. A test using simultaneous EEG-fMRI data acquired from left-hand (LH) and right-hand (RH) clenching tasks performed by volunteers found that the proposed rsPCA method substantially reduced helium-pump artifacts in the EEG data and significantly enhanced task-related gamma band activity levels (p=0.0038 and 0.0363 for LH and RH tasks, respectively) in EEG data that have had GAs and BCAs removed. The spatial patterns of the fMRI data were estimated using a hemodynamic response function (HRF) modeled from the estimated gamma band activity in a general linear model (GLM) framework. Active voxel clusters were identified in the post-/pre-central gyri of motor area, only from the rsPCA method (uncorrected p<0.001 for both LH/RH tasks). In addition, the superior temporal pole areas were consistently observed (uncorrected p<0.001 for the LH task and uncorrected p<0.05 for the RH task) in the spatial patterns of the HRF model for gamma band activity when the task paradigm and movement were also included in the GLM.
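    A simplified, non-recursive sketch of segment-based PCA cleaning in the spirit of rsPCA: within each EEG segment, the largest-variance principal directions are treated as the vibration artifact and projected out (the actual method extracts helium-pump feature vectors recursively; the data here are hypothetical):

        import numpy as np

        def segment_pca_clean(eeg, seg_len, n_remove=1):
            """eeg: channels x samples. Remove the top principal
            component(s) within each non-overlapping segment."""
            cleaned = eeg.copy()
            for s in range(0, eeg.shape[1] - seg_len + 1, seg_len):
                seg = eeg[:, s:s + seg_len]
                seg_c = seg - seg.mean(axis=1, keepdims=True)
                _, vecs = np.linalg.eigh(seg_c @ seg_c.T)
                artifact = vecs[:, -n_remove:]     # largest-variance directions
                cleaned[:, s:s + seg_len] = seg - artifact @ (artifact.T @ seg_c)
            return cleaned

        eeg = np.random.randn(32, 5000)            # stand-in GA/BCA-corrected EEG
        out = segment_pca_clean(eeg, seg_len=500)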

  15. Resting-State Functional Connectivity by Independent Component Analysis-Based Markers Corresponds to Areas of Initial Seizure Propagation Established by Prior Modalities from the Hypothalamus

    PubMed Central

    Wilfong, Angus A.; Curry, Daniel J.

    2016-01-01

    Abstract The aims of this study were to evaluate a clinically practical functional connectivity (fc) protocol designed to blindly identify the corresponding areas of initial seizure propagation and also to differentiate these areas from remote secondary areas affected by seizure. The patients in this cohort had intractable epilepsy caused by intrahypothalamic hamartoma, which is the location of the ictal focus. The ictal propagation pathway is homogeneous and established, thus creating the optimum situation for the proposed method validation study. Twelve patients with seizures from hypothalamic hamartoma and six normal control patients underwent resting-state functional MRI, using independent component analysis (ICA) to identify network differences in patients. This was followed by seed-based connectivity measures to determine the extent of fc derangement between hypothalamus and these areas. The areas with significant change in connectivity were compared with the results of prior studies' modalities used to evaluate seizure propagation. The left amygdala-parahippocampal gyrus area, cingulate gyrus, and occipitotemporal gyrus demonstrated the highest derangement in connectivity with the hypothalamus, p < 0.01, corresponding to the initial seizure propagation areas established by prior modalities. Areas of secondary ictal propagation were differentiated from these initial locations by first being identified as an abnormal neuronal signal source through ICA, but did not show significant connectivity directly with the known ictal focus. Noninvasive connectivity measures correspond to areas of initial ictal propagation and differentiate such areas from secondary ictal propagation, which may aid in ictal focus surgical disconnection planning and support the use of this newer modality for adjunctive information in epilepsy surgery evaluation. PMID:27503346

  16. Multi-component separation and analysis of bat echolocation calls.

    PubMed

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
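    For the last step described above, the analytic signal yields the instantaneous frequency of each isolated component; a minimal Python sketch on a synthetic downward FM sweep (the sampling rate and sweep parameters are illustrative, not taken from the paper):

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_frequency(component, fs):
            """Time-frequency ridge of one isolated FM component via the
            Hilbert transform (analytic-signal phase derivative)."""
            phase = np.unwrap(np.angle(hilbert(component)))
            return np.diff(phase) * fs / (2.0 * np.pi)   # Hz

        fs = 250_000.0                                   # 250 kHz sampling
        t = np.arange(0, 0.003, 1.0 / fs)
        sweep = np.cos(2 * np.pi * (90_000 * t - 5e6 * t ** 2))
        print(instantaneous_frequency(sweep, fs)[:5])    # ~90 kHz, sweeping down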

  17. Fetal magnetocardiographic mapping using independent component analysis.

    PubMed

    Comani, S; Mantini, D; Alleva, G; Di Luzio, S; Romani, G L

    2004-12-01

    Fetal magnetocardiography (fMCG) is the only noninvasive technique allowing effective assessment of fetal cardiac electrical activity during the prenatal period. The reconstruction of reliable magnetic field mapping associated with fetal heart activity would allow three-dimensional source localization. The efficiency of independent component analysis (ICA) in restoring reliable fetal traces from multichannel fMCG has already been demonstrated. In this paper, we describe a method of reconstructing a complete set of fetal signals hidden in multichannel fMCG preserving their correct spatial distribution, waveform, polarity and amplitude. Fetal independent components, retrieved with an ICA algorithm (FastICA), were interpolated (fICI method) using information gathered during FastICA iterations. The restored fetal signals were used to reconstruct accurate magnetic mapping for every millisecond during the average beat. The procedure was validated on fMCG recorded from the 22nd gestational week onward with a multichannel MCG system working in a shielded room. The interpolated traces were compared with those obtained with a standard technique, and the consistency of fetal mapping was checked by evaluating source localizations relative to fetal echocardiographic information. Good magnetic field distributions during the P-QRS-T waves were attained with fICI for all gestational periods; their reliability was confirmed by three-dimensional source localizations. PMID:15712724

  18. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While it is a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intensive and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
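    A sketch of the fixed-basis idea: instead of recomputing eigenvectors per scan, project each thermal sequence onto a precomputed basis (here a random orthonormal stand-in; in the method above it would come from an analytic thermal-response model):

        import numpy as np

        def project_fixed_basis(frames, basis):
            """frames: time x height x width; basis: time x k fixed
            eigenvectors. Returns k component images."""
            nt, ny, nx = frames.shape
            data = frames.reshape(nt, ny * nx)
            data = data - data.mean(axis=0)        # remove temporal mean
            comps = basis.T @ data
            return comps.reshape(basis.shape[1], ny, nx)

        frames = np.random.rand(100, 64, 64)             # stand-in sequence
        basis, _ = np.linalg.qr(np.random.randn(100, 5)) # stand-in fixed basis
        imgs = project_fixed_basis(frames, basis)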

  19. Principal Components Analysis In Medical Imaging

    NASA Astrophysics Data System (ADS)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Ref. 3,4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6), and dual-energy x-ray imaging (Ref. 6,7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.

  20. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  1. Online kernel principal component analysis: a reduced-order model.

    PubMed

    Honeine, Paul

    2012-09-01

    Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
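    The linear Oja rule that the paper kernelizes is compact enough to state directly; a Python sketch with an illustrative learning rate and synthetic data:

        import numpy as np

        def oja_first_axis(X, lr=0.01, epochs=50):
            """Online estimate of the first principal axis:
            w <- w + lr * y * (x - y * w), with y = w.x."""
            rng = np.random.default_rng(0)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in X:
                    y = w @ x
                    w += lr * y * (x - y * w)      # self-normalizing update
            return w / np.linalg.norm(w)

        X = np.random.randn(500, 5) @ np.diag([3.0, 1.0, 1.0, 0.5, 0.2])
        print(oja_first_axis(X - X.mean(axis=0)))  # ~ leading eigenvector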

  2. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-02-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in southern Ontario (ON), Canada, were used to evaluate Environment Canada's regional chemical transport model predictions of primary organic aerosol (POA). Environment Canada's operational numerical weather prediction model and the 2006 Canadian and 2005 US national emissions inventories were used as input to the chemical transport model (named AURAMS). Particle-component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. Co-located carbon monoxide (CO), PM2.5 black carbon (BC), and PM1 SO4 measurements were also used for evaluation and interpretation, permitting a detailed diagnostic model evaluation. At the urban site, good agreement was observed for the comparison of daytime campaign PM1 POA and HOA mean values: 1.1 μg m-3 vs. 1.2 μg m-3, respectively. However, a POA overprediction was evident on calm nights due to an overly-stable model surface layer. Biases in model POA predictions trended from positive to negative with increasing HOA values. This trend has several possible explanations, including (1) underweighting of urban locations in particulate matter (PM) spatial surrogate fields, (2) overly-coarse model grid spacing for resolving urban-scale sources, and (3) lack of a model particle POA evaporation process during dilution of vehicular POA tail-pipe emissions to urban scales. Furthermore, a trend in POA bias was observed at the urban site as a function of the BC/HOA ratio, suggesting a possible association of POA underprediction with diesel combustion sources. For several time periods, POA overprediction was also observed for sulphate-rich plumes, suggesting that our model POA fractions for the PM2.5 chemical speciation profiles may be too high for these point sources. At the rural Harrow site

  3. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-09-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species. At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m-3 and 1.2 μg m-3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer. This model behaviour can be explained by a combination of model under

  4. Quantitative Analysis of PMLA Nanoconjugate Components after Backbone Cleavage

    PubMed Central

    Ding, Hui; Patil, Rameshwar; Portilla-Arias, Jose; Black, Keith L.; Ljubimova, Julia Y.; Holler, Eggehard

    2015-01-01

    Multifunctional polymer nanoconjugates containing multiple components show great promise in cancer therapy, but in most cases complete analysis of each component is difficult. Polymalic acid (PMLA) based nanoconjugates have demonstrated successful brain and breast cancer treatment. They consist of multiple components including targeting antibodies, Morpholino antisense oligonucleotides (AONs), and endosome escape moieties. The component analysis of PMLA nanoconjugates is extremely difficult using conventional spectrometry and HPLC methods. Taking advantage of the polyester nature of PMLA, which can be cleaved by ammonium hydroxide, we describe a method to analyze the content of antibody and AON within nanoconjugates simultaneously using SEC-HPLC by selectively cleaving the PMLA backbone. The selected cleavage conditions only degrade PMLA without affecting the integrity and biological activity of the antibody. Although the amount of antibody could also be determined using the bicinchoninic acid (BCA) method, our selective cleavage method gives more reliable results and is more powerful. Our approach provides a new direction for the component analysis of polymer nanoconjugates and nanoparticles. PMID:25894227

  5. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106

  7. Independent component analysis for noisy data--MEG data analysis.

    PubMed

    Ikeda, S; Toyama, K

    2000-12-01

    Independent component analysis (ICA) is a new, simple and powerful idea for analyzing multivariate data. One of its successful applications is neurobiological data analysis, such as electroencephalography (EEG), magnetic resonance imaging (MRI), and magnetoencephalography (MEG). However, many problems remain. In most cases, neurobiological data contain a lot of sensor noise, and the number of independent components is unknown. In this article, we discuss an approach to separate noise-contaminated data without knowing the number of independent components. A well-known two-stage approach to ICA is to pre-process the data by principal component analysis (PCA), and then estimate the necessary rotation matrix. Since PCA does not work well for noisy data, we implement a factor analysis model for pre-processing. In the new pre-processing, the number of sources and the amount of sensor noise are estimated. After the pre-processing, the rotation matrix is estimated using an ICA method. Through experiments with MEG data, we show that this approach is effective.
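    A sketch of the two-stage idea with factor analysis replacing PCA as the preprocessing, in Python with synthetic sources (the source shapes, mixing, and noise level are illustrative):

        import numpy as np
        from sklearn.decomposition import FactorAnalysis, FastICA

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 2000)
        S = np.c_[np.sign(np.sin(2 * np.pi * 5 * t)),     # two non-Gaussian
                  np.sin(2 * np.pi * 13 * t) ** 3]        # sources
        X = S @ rng.standard_normal((2, 10)) + 0.5 * rng.standard_normal((2000, 10))

        # Factor analysis models per-sensor noise explicitly (unlike PCA)
        # and estimates the source subspace; ICA then finds the rotation.
        Z = FactorAnalysis(n_components=2).fit_transform(X)
        sources = FastICA(n_components=2, random_state=0).fit_transform(Z)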

  8. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    NASA Astrophysics Data System (ADS)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  9. Meta-Analysis of Mathematic Basic-Fact Fluency Interventions: A Component Analysis

    ERIC Educational Resources Information Center

    Codding, Robin S.; Burns, Matthew K.; Lukito, Gracia

    2011-01-01

    Mathematics fluency is a critical component of mathematics learning yet few attempts have been made to synthesize this research base. Seventeen single-case design studies with 55 participants were reviewed using meta-analytic procedures. A component analysis of practice elements was conducted and treatment intensity and feasibility were examined.…

  10. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  11. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  12. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
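    The one-step estimate can be written as a single linear regression: model each measured spectrum as background principal components plus the SO2 Jacobian, and read the column off the Jacobian's coefficient. A Python sketch with stand-in inputs (the PC count, Jacobian shape, and spectrum are illustrative assumptions):

        import numpy as np

        def fit_so2_column(spectrum, pcs, jacobian):
            """spectrum: wavelengths; pcs: wavelengths x k background PCs;
            jacobian: wavelengths. Returns the SO2 column coefficient."""
            A = np.column_stack([pcs, jacobian])
            coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
            return coef[-1]

        nwl, k = 300, 8
        pcs = np.linalg.qr(np.random.randn(nwl, k))[0]   # stand-in background PCs
        jac = np.exp(-np.linspace(0, 3, nwl))            # stand-in SO2 Jacobian
        spectrum = pcs @ np.random.randn(k) + 0.7 * jac
        print(fit_so2_column(spectrum, pcs, jac))        # recovers ~0.7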

  13. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two-channel exercise ECG signals are acquired. We decompose these two channels into eight wavelet layers and sum the useful wavelet coefficients separately, obtaining two ECG channels free of baseline drift and other interference components. However, the signals still contain electrode-movement noise, power-frequency interference, and other interferences. Second, we process these two channels together with one manually constructed channel using independent component analysis, obtaining the separated ECG signal; the residual noises are removed effectively. Finally, a comparative experiment between the proposed method and direct independent component analysis of the same two-channel exercise ECG signals shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error (RMSE) decreases by 2.522, demonstrating the high reliability of the proposed method.
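    A sketch of the wavelet stage only (drift removal by zeroing the coarsest approximation), using PyWavelets; the sampling rate, wavelet, and level count are illustrative assumptions, and the ICA stage is omitted:

        import numpy as np
        import pywt

        def wavelet_drift_removal(ecg, wavelet="db4", levels=8):
            """Decompose to `levels` and zero the coarsest approximation,
            which carries the baseline drift; reconstruct from the rest."""
            coeffs = pywt.wavedec(ecg, wavelet, level=levels)
            coeffs[0] = np.zeros_like(coeffs[0])
            return pywt.waverec(coeffs, wavelet)

        fs = 500.0
        t = np.arange(0, 10, 1 / fs)
        ecg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 0.05 * t)
        clean = wavelet_drift_removal(ecg)[: len(t)]     # drift mostly removed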

  14. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multi-spectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain sensitivity similar to that of the PCA-ICA approach while offering more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.
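    In the semi-blind case with known spectra, per-pixel unmixing reduces to a nonnegative least-squares fit; a tiny Python sketch with stand-in spectra (this is not VCA itself, which estimates the endmember spectra blindly):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(5)
        spectra = np.abs(rng.standard_normal((21, 3)))   # wavelengths x chromophores
        abundance_true = np.array([0.2, 0.5, 0.3])
        pixel = spectra @ abundance_true                 # measured optoacoustic spectrum

        abundance, residual = nnls(spectra, pixel)
        print(abundance)                                 # ~ [0.2, 0.5, 0.3]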

  15. CHEMICAL ANALYSIS METHODS FOR ATMOSPHERIC AEROSOL COMPONENTS

    EPA Science Inventory

    This chapter surveys the analytical techniques used to determine the concentrations of aerosol mass and its chemical components. The techniques surveyed include mass, major ions (sulfate, nitrate, ammonium), organic carbon, elemental carbon, and trace elements. As reported in...

  16. Component design bases - A template approach

    SciTech Connect

    Pabst, L.F. ); Strickland, K.M. )

    1991-01-01

    A well-documented nuclear plant design basis can enhance plant safety and availability. Older plants, however, often lack historical evidence of the original design intent, particularly for individual components. Most plant documentation describes the actual design (what is) rather than the bounding limits of the design. Without knowledge of these design limits, information from system descriptions and equipment specifications is often interpreted as inviolate design requirements. Such interpretations may lead to unnecessary design conservatism in plant modifications and unnecessary restrictions on plant operation. In 1986, Florida Power & Light Company's (FP&L's) Turkey Point plant embarked on one of the first design basis reconstitution programs in the United States to catalog the true design requirements. As the program developed, design basis users expressed a need for additional information at the component level. This paper outlines a structured (template) approach to develop useful component design basis information (including the WHYs behind the design).

  17. Application of independent component analysis for beam diagnosis

    SciTech Connect

    Huang, X.; Lee, S.Y.; Prebys, Eric; Tomlin, Ray; /Fermilab

    2005-05-01

    The independent component analysis (ICA) is applied to analyze simultaneous multiple turn-by-turn beam position monitor (BPM) data of synchrotrons. The sampled data are decomposed to physically independent source signals, such as betatron motion, synchrotron motion and other perturbation sources. The decomposition is based on simultaneous diagonalization of several unequal time covariance matrices, unlike the model independent analysis (MIA), which uses the equal-time covariance matrix only. Consequently, the new method has an advantage over MIA in isolating the independent modes and is more robust under the influence of contaminating signals from bad BPMs. The spatial pattern and temporal pattern of each resulting component (mode) can be used to identify and analyze the associated physical cause. Beam optics can be studied on the basis of the betatron modes. The method has been successfully applied to the Booster Synchrotron at Fermilab.
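    The simplest instance of this idea uses just two covariance matrices (equal-time and one-lag), for which joint diagonalization reduces to a generalized eigendecomposition, as in the AMUSE algorithm; the paper's method diagonalizes several lags simultaneously. A Python sketch with synthetic mixtures:

        import numpy as np
        from scipy.linalg import eigh

        def two_lag_separation(X, lag=1):
            """X: sensors x samples. The generalized eigenvectors of
            (C_lag, C_0) give the unmixing rows."""
            Xc = X - X.mean(axis=1, keepdims=True)
            n = Xc.shape[1]
            C0 = Xc @ Xc.T / n
            C1 = Xc[:, :-lag] @ Xc[:, lag:].T / (n - lag)
            C1 = (C1 + C1.T) / 2                 # symmetrize
            _, W = eigh(C1, C0)
            return W.T @ Xc                      # recovered source signals

        n = np.arange(2000)
        S = np.vstack([np.sin(0.30 * n), np.sin(0.11 * n),
                       np.sign(np.sin(0.05 * n))])
        X = np.random.randn(4, 3) @ S + 0.01 * np.random.randn(4, 2000)
        Y = two_lag_separation(X)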

  18. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component-based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open-source, object-oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS' Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  19. EXAFS and principal component analysis : a new shell game.

    SciTech Connect

    Wasserman, S.

    1998-10-28

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.

  20. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
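    A very small principal-component-pursuit sketch of the RPCA step, alternating singular-value and entrywise soft-thresholding (a simplified heuristic, not the exact solver usually used for RPCA; the B-scan matrix is synthetic):

        import numpy as np

        def rpca_pcp(M, lam=None, mu=None, n_iter=200):
            """Split M into low-rank L (clutter) plus sparse S (anomalies)."""
            m, n = M.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * np.abs(M).mean()
            S = np.zeros_like(M)
            for _ in range(n_iter):
                U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = U @ np.diag(np.maximum(sv - mu, 0)) @ Vt        # SVT step
                S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam * mu, 0)
            return L, S

        # Synthetic B-scan: rank-1 clutter plus a few strong point anomalies
        M = np.outer(np.ones(64), np.sin(np.arange(128)))
        M += (np.random.rand(64, 128) < 0.01) * 5.0
        L, S = rpca_pcp(M)                        # S highlights the anomalies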

  1. Effect of the components' interface on the synthesis of methanol over Cu/ZnO from CO2/H2: a microkinetic analysis based on DFT + U calculations.

    PubMed

    Tang, Qian-Lin; Zou, Wen-Tian; Huang, Run-Kun; Wang, Qi; Duan, Xiao-Xuan

    2015-03-21

    The elucidation of chemical reactions occurring on composite systems (e.g., copper (Cu)/zincite (ZnO)) from first principles is a challenging task because of their very large sizes and complicated equilibrium geometries. By combining the density functional theory plus U (DFT + U) method with microkinetic modeling, the present study has investigated the role of the phase boundary in CO2 hydrogenation to methanol over Cu/ZnO. The absence of hydrogenation locations created by the interface between the two catalyst components was revealed based on the calculated turnover frequency under realistic conditions, in which the importance of interfacial copper to provide spillover hydrogen for remote Cu(111) sites was stressed. Coupled with the fact that methanol production on the binary catalyst was recently believed to predominantly involve the bulk metallic surface, the spillover of interface hydrogen atoms onto Cu(111) facets facilitates the production process. The cooperative influence of the two different kinds of copper sites can be rationalized applying the Brönsted-Evans-Polanyi (BEP) relationship and allows us to find that the catalytic activity of ZnO-supported Cu catalysts is of volcano type with decreasing particle size. Our results here may have useful implications in the future design of new Cu/ZnO-based materials for CO2 transformation to methanol.

  3. Multi-component joint analysis of surface waves

    NASA Astrophysics Data System (ADS)

    Dal Moro, Giancarlo; Moura, Rui Miguel Marques; Moustafa, Sayed S. R.

    2015-08-01

    Propagation of surface waves can occur with complex energy distribution amongst the various modes. It is shown that even simple VS (shear-wave velocity) profiles can generate velocity spectra that, because of a complex mode excitation, can be quite difficult to interpret in terms of modal dispersion curves. In some cases, Rayleigh waves show relevant differences depending on the considered component (radial or vertical) and the kind of source (vertical impact or explosive). Contrary to several simplistic assumptions often proposed, it is shown, both via synthetic and field datasets, that the fundamental mode of Rayleigh waves can be almost completely absent. This sort of evidence demonstrates the importance of a multi-component analysis capable of providing the necessary elements to properly interpret the data and adequately constrain the subsurface model. It is purposely shown that, through the sole use of horizontal geophones, both Love and Rayleigh (radial-component) waves can be acquired efficiently and quickly. The presented field dataset reports a case where Rayleigh waves (both their vertical and radial components) appear largely dominated by higher modes with little or no evidence of the fundamental mode. The joint inversion of the radial and vertical components of Rayleigh waves jointly with Love waves is performed by adopting a multi-objective inversion scheme based on the computation of synthetic seismograms for the three considered components and the minimization of the whole velocity spectra misfits (Full Velocity Spectra - FVS - inversion). Such a FVS multi-component joint inversion can better handle complex velocity spectra, thus providing a more robust subsurface model not affected by erroneous velocity spectra interpretations and non-uniqueness of the solution.

  4. A Component Analysis of Marriage Enrichment.

    ERIC Educational Resources Information Center

    Buston, Beverley G.; And Others

    Although marriage enrichment programs have been shown to be effective for many couples, a multidimensional approach to assessment is needed in investigating these groups. The components of information and social support in successful marriage enrichment programs were compared in a completely crossed 2 x 2 factorial design with repeated measures.…

  5. Two-component signal transduction in Agaricus bisporus: a comparative genomic analysis with other basidiomycetes through the web-based tool BASID2CS.

    PubMed

    Lavín, José L; García-Yoldi, Alberto; Ramírez, Lucía; Pisabarro, Antonio G; Oguiza, José A

    2013-06-01

    Two-component systems (TCSs) are signal transduction mechanisms present in many eukaryotes, including fungi that play essential roles in the regulation of several cellular functions and responses. In this study, we carry out a genomic analysis of the TCS proteins in two varieties of the white button mushroom Agaricus bisporus. The genomes of both A. bisporus varieties contain eight genes coding for TCS proteins, which include four hybrid Histidine Kinases (HKs), a single histidine-containing phosphotransfer (HPt) protein and three Response Regulators (RRs). Comparison of the TCS proteins among A. bisporus and the sequenced basidiomycetes showed a conserved core complement of five TCS proteins including the Tco1/Nik1 hybrid HK, HPt protein and Ssk1, Skn7 and Rim15-like RRs. In addition, Dual-HKs, unusual hybrid HKs with 2 HK and 2 RR domains, are absent in A. bisporus and are limited to various species of basidiomycetes. Differential expression analysis showed no significant up- or down-regulation of the Agaricus TCS genes in the conditions/tissue analyzed with the exception of the Skn7-like RR gene (Agabi_varbisH97_2|198669) that is significantly up-regulated on compost compared to cultured mycelia. Furthermore, the pipeline web server BASID2CS (http://bioinformatics.unavarra.es:1000/B2CS/BASID2CS.htm) has been specifically designed for the identification, classification and functional annotation of putative TCS proteins from any predicted proteome of basidiomycetes using a combination of several bioinformatic approaches.

  6. Discrimination of rapeseed and weeds under actual field conditions based on principal component analysis and artificial neural network by VIS/NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Min; Bao, Yidan; He, Yong

    2007-11-01

    The study documented successful discrimination between five weed species and rapeseed plants under actual field conditions using visible and near-infrared (Vis/NIR) spectroscopy. A hybrid recognition model, BP artificial neural networks (BP-ANN) combined with principal component analysis (PCA), was established for the discrimination of weeds in rapeseed fields. Spectral measurements were performed in the field on the canopies of rapeseed and the five weed species, 180 samples in total, using a spectrophotometer (325-1075 nm). Six optimal PCs were selected as the input of the BP neural network to build the prediction model. Rapeseed samples were marked as 1, while the five weed species were marked as 2, 3, 4, 5, and 6; these labels were used as the output set of the BP-ANN. 120 samples were randomly selected as the training set, and the remainder as the prediction set. The model showed excellent predictions, with a correlation value of 0.9745 and a relative standard deviation (RSD) under 5%, so 100% prediction accuracy was achieved. The results are promising for further work on real-time identification of weed patches in rapeseed fields for precision weed management.
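    A sketch of the PCA + BP-ANN pipeline with scikit-learn (the spectra and labels below are random placeholders, so the printed score is meaningless except as a usage illustration):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        spectra = rng.random((180, 751))           # 180 canopies x 751 bands
        species = rng.integers(1, 7, size=180)     # 1 = rapeseed, 2-6 = weeds

        # Six principal components feed a back-propagation network
        model = make_pipeline(
            PCA(n_components=6),
            MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                          random_state=0))
        model.fit(spectra[:120], species[:120])    # 120 training samples
        print(model.score(spectra[120:], species[120:]))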

  7. Quantitative analysis of planetary reflectance spectra with principal components analysis

    NASA Technical Reports Server (NTRS)

    Johnson, P. E.; Smith, M. O.; Adams, J. B.

    1985-01-01

    A technique is presented for quantitative analysis of planetary reflectance spectra as mixtures of particles on microscopic and macroscopic scales using principal components analysis. This technique allows for determination of the endmembers being mixed, their abundance, and the scale of mixing, as well as other physical parameters. Eighteen lunar telescopic reflectance spectra of the Copernicus crater region, from 600 nm to 1800 nm in wavelength, are modeled in terms of five likely endmembers: mare basalt, mature mare soil, anorthosite, mature highland soil, and clinopyroxene. These endmembers were chosen from a similar analysis of 92 lunar soil and rock samples. The models fit the data to within 2 percent rms. It is found that the goodness of fit is marginally better for intimate mixing over macroscopic mixing.

  8. Appliance of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models for each server, and action models for clients and intruders. We applied ICA to analyze the audit trail of the simulated information system, and we report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated the two attacks correctly and related an attack to the anomalies in normal applications produced under the influence of that attack.

  10. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children.

    PubMed

    Wassenburg, Stephanie I; de Koning, Björn B; de Vries, Meinou H; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension.

  13. Sea surface temperature patterns in Tropical Atlantic: principal component analysis and nonlinear principal component analysis

    NASA Astrophysics Data System (ADS)

    Kenfack, S. C.; Mkankam, K. F.; Alory, G.; du Penhoat, Y.; Hounkonnou, N. M.; Vondou, D. A.; Bawe, G. N.

    2014-03-01

    Principal Component Analysis (PCA) is one of the popular statistical methods for feature extraction. A neural network model has been applied to PCA to obtain nonlinear principal component analysis (NLPCA), which allows the extraction of nonlinear features in the dataset that are missed by PCA. NLPCA is applied to the monthly Sea Surface Temperature (SST) data from the eastern tropical Atlantic Ocean (29° W-21° E, 25° S-7° N) for the period 1982-2005. The focus is on the differences between the SST inter-annual variability patterns extracted through the traditional PCA and the NLPCA methods. The first mode of NLPCA explains 45.5% of the total variance of the SST anomaly, compared to 42% explained by the first PCA mode. Results from previous studies that detected the Atlantic cold tongue (ACT) as the main mode are confirmed. It is observed that the maximum signal in the Gulf of Guinea (GOG) is located along coastal Angola. In agreement with composite analysis, NLPCA exhibits two types of ACT, referred to as weak and strong Atlantic cold tongues. These two events are not totally symmetrical. NLPCA thus explains the results given by both PCA and composite analysis. A particular area observed along the northern boundary between 13 and 5° W vanishes in the strong ACT case and reaches its maximum westward extension in the weak ACT case. It is also observed that the original SST data correlate well with both NLPCA and PCA, with a stronger correlation over the ACT area for NLPCA and to the southwest for PCA.
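    NLPCA of this kind is typically realized as a bottleneck autoencoder; a toy Python sketch on a curved two-dimensional dataset (the architecture and data are illustrative assumptions, not the study's configuration):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        theta = rng.uniform(0, 2 * np.pi, 500)
        X = np.c_[np.cos(theta), np.sin(2 * theta)]      # curved structure
        X += 0.05 * rng.standard_normal(X.shape)

        # Single-unit bottleneck: the middle layer is the nonlinear
        # principal component; its reconstruction traces a curve that a
        # straight PCA axis cannot follow.
        ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                          max_iter=5000, random_state=0)
        ae.fit(X, X)
        print(ae.score(X, X))                            # reconstruction R^2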

  14. Economic analysis of solar building components

    NASA Astrophysics Data System (ADS)

    Hassan, K. E.

    The problem of the economic choice of building materials and air conditioning equipment, solar and conventional, is dealt with. The formulation is particularly suitable for buildings with combined passive and positive indoor temperature control. The heat fluxes through the various components of the building are the thermal criteria used. The various costs are analyzed and separated into fixed (or capital) costs, and operating costs which constitute the total annual cost. This latter cost is the objective function to be minimized to give the optimum design. The problem is formulated in a most general dimensionless form. It is simplified by neglecting the effect on the total annual cost of the difference between the peak and average heat fluxes, which is negligible in most cases. The simplified relation is solved in closed form to optimize a single layer or two layers of a composite building component. An example of the results is presented graphically.

  15. Experimental system and component performance analysis

    SciTech Connect

    Peterman, K.

    1984-10-01

    A prototype dye laser flow loop was constructed to flow test large power amplifiers in Building 169. The flow loop is designed to operate at supply pressures up to 900 psig and flow rates up to 250 GPM. During the initial startup of the flow loop experimental measurements were made to evaluate component and system performance. Three candidate dye flow loop pumps and three different pulsation dampeners were tested.

  16. Multibody model reduction by component mode synthesis and component cost analysis

    NASA Technical Reports Server (NTRS)

    Spanos, J. T.; Mingori, D. L.

    1990-01-01

    The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.

  17. Nonlinear chaotic component extraction for postural stability analysis.

    PubMed

    Snoussi, Hichem; Hewson, David; Duchêne, Jacques

    2009-01-01

    This paper proposes a nonlinear analysis of the human postural steadiness system. The analyzed signal is the displacement of the centre of pressure (COP) collected from a force plate used for measuring postural sway. Instead of computing the classical nonlinear parameters on the whole signal, the proposed method analyzes the nonlinear dynamics of the intrinsic mode functions (IMFs) of the COP signal. Based on the computation of the IMFs' Lyapunov exponents, it is shown that pre-processing the COP signal with the Empirical Mode Decomposition allows an efficient extraction of its chaotic component.
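    A sketch of the two-step pipeline described above, assuming the third-party PyEMD (EMD-signal) and nolds packages are available; the sway-like signal below is synthetic, and default estimator settings are used rather than the paper's.

```python
# Decompose a sway-like signal into IMFs, then estimate each IMF's
# largest Lyapunov exponent (Rosenstein's method via nolds).
import numpy as np
from PyEMD import EMD
import nolds

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 6000)
cop = np.sin(0.7 * t) + 0.3 * np.sin(3.1 * t) + 0.1 * rng.standard_normal(t.size)

imfs = EMD().emd(cop)                      # intrinsic mode functions
for i, imf in enumerate(imfs):
    lyap = nolds.lyap_r(imf)               # largest Lyapunov exponent estimate
    print(f"IMF {i}: largest Lyapunov exponent = {lyap:.4f}")
```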

  18. Component-Based Framework for Subsurface Simulations

    SciTech Connect

    Palmer, Bruce J.; Fang, Yilin; Hammond, Glenn E.; Gurumoorthi, Vidhya

    2007-08-01

    Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges from both the algorithmic and the computer science perspective. This paper will describe the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks, one for performing smoothed particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for pore-scale simulations, while the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the two frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow.

  19. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a component-oriented systems development…

  20. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For quantitative assessment, the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) indices are calculated to measure the denoising effect at the gray-level and structure-level fidelity aspects, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising; it is even hard to distinguish the original noiseless image from the image recovered by GMCA by visual inspection.
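    To illustrate the iterative-thresholding idea behind MCA-family methods, here is a toy one-dimensional separation, not the GMCA algorithm itself: a signal made of spikes plus oscillations is split with two assumed dictionaries (identity for spikes, DCT for oscillations) while the threshold decreases.

```python
# Toy MCA-style separation by morphological diversity with a decreasing threshold.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n = 256
osc = np.cos(2 * np.pi * 12 * np.arange(n) / n)
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 3.0
y = osc + spikes + 0.02 * rng.standard_normal(n)

x_spk = np.zeros(n)
x_dct = np.zeros(n)
for thresh in np.linspace(2.5, 0.1, 50):    # threshold decreases each iteration
    # Update the DCT-sparse (oscillatory) part from the current residual.
    c = dct(y - x_spk, norm="ortho")
    c[np.abs(c) < thresh] = 0.0             # hard thresholding
    x_dct = idct(c, norm="ortho")
    # Update the spike part (identity dictionary) the same way.
    r = y - x_dct
    x_spk = np.where(np.abs(r) > thresh, r, 0.0)

print("oscillation error:", np.linalg.norm(x_dct - osc))
print("spike error:", np.linalg.norm(x_spk - spikes))
```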

  1. Columbia River Component Data Gap Analysis

    SciTech Connect

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  2. Interaction Analysis of a Two-Component System Using Nanodiscs

    PubMed Central

    Hörnschemeyer, Patrick; Liss, Viktoria; Heermann, Ralf; Jung, Kirsten; Hunke, Sabine

    2016-01-01

    Two-component systems are the major means by which bacteria couple adaptation to environmental changes. All utilize a phosphorylation cascade from a histidine kinase to a response regulator, and some also employ an accessory protein. The system-wide signaling fidelity of two-component systems is based on preferential binding between the signaling proteins. However, information on the interaction kinetics between membrane embedded histidine kinase and its partner proteins is lacking. Here, we report the first analysis of the interactions between the full-length membrane-bound histidine kinase CpxA, which was reconstituted in nanodiscs, and its cognate response regulator CpxR and accessory protein CpxP. Using surface plasmon resonance spectroscopy in combination with interaction map analysis, the affinity of membrane-embedded CpxA for CpxR was quantified, and found to increase by tenfold in the presence of ATP, suggesting that a considerable portion of phosphorylated CpxR might be stably associated with CpxA in vivo. Using microscale thermophoresis, the affinity between CpxA in nanodiscs and CpxP was determined to be substantially lower than that between CpxA and CpxR. Taken together, the quantitative interaction data extend our understanding of the signal transduction mechanism used by two-component systems. PMID:26882435
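    As a hedged aside on how an equilibrium affinity of this kind is commonly quantified, the sketch below fits a 1:1 Langmuir binding isotherm, R = Rmax·C/(Kd + C), to steady-state responses; the concentrations and response values are invented for illustration and are not the paper's data.

```python
# Fit a 1:1 binding isotherm to hypothetical steady-state SPR responses.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, rmax, kd):
    return rmax * conc / (kd + conc)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])     # µM (hypothetical)
resp = np.array([4.8, 13.0, 33.1, 60.2, 83.9, 94.7, 98.3])  # response units

(rmax, kd), _ = curve_fit(langmuir, conc, resp, p0=(100.0, 0.5))
print(f"Rmax = {rmax:.1f} RU, Kd = {kd:.3f} µM")
```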

  3. Clonal analysis of the epithelial component of Warthin's tumor.

    PubMed

    Honda, K; Kashima, K; Daa, T; Yokoyama, S; Nakayama, I

    2000-11-01

    The proliferation of the epithelial component of Warthin's tumor is generally considered to represent a neoplastic condition. There has been much controversy about the histogenesis of this tumor, and the clonality of the epithelial component has not been clarified. We examined the clonal status of epithelial cells of Warthin's tumor by using a polymerase chain reaction (PCR) method based on trinucleotide repeat polymorphism of the X chromosome-linked human androgen receptor gene (HUMARA) and on random inactivation of the gene by methylation. Total DNA was isolated from formalin-fixed, paraffin-embedded tissue from 16 women with Warthin's tumor. Of the 16 cases analyzed, 7 were heterozygous for the HUMARA polymorphism and informative. The epithelial components of the tumors from the 7 cases were microdissected under the light microscope, and were subjected to extraction of DNA and HUMARA analysis. Using a permanent aqueous mounting medium during microdissection, we succeeded in reducing the rate of contamination by lymphocytes in the samples to less than 10%. All 7 cases showed patterns of polyclonal proliferation in the HUMARA analysis. Our results showed the nonclonal nature of Warthin's tumor, suggesting that Warthin's tumor is a non-neoplastic tumor-like condition. HUM PATHOL 31:1377-1380. PMID:11112212

  4. Si-based RF MEMS components.

    SciTech Connect

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  5. Core Bioactive Components Promoting Blood Circulation in the Traditional Chinese Medicine Compound Xueshuantong Capsule (CXC) Based on the Relevance Analysis between Chemical HPLC Fingerprint and In Vivo Biological Effects

    PubMed Central

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  6. Core bioactive components promoting blood circulation in the traditional Chinese medicine compound xueshuantong capsule (CXC) based on the relevance analysis between chemical HPLC fingerprint and in vivo biological effects.

    PubMed

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and

  7. Core bioactive components promoting blood circulation in the traditional Chinese medicine compound xueshuantong capsule (CXC) based on the relevance analysis between chemical HPLC fingerprint and in vivo biological effects.

    PubMed

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and
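    A minimal sketch of one of the four relevance analyses named in this record (ridge regression), with entirely synthetic stand-ins for the 21 HPLC component peak areas and one biological response; coefficient magnitudes are read as rough contribution estimates.

```python
# Ridge regression of a biological effect on component peak areas (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
peaks = rng.uniform(0, 1, (9, 21))                 # 9 formulations x 21 components
true_w = np.zeros(21)
true_w[[0, 4, 7]] = [1.5, -0.8, 1.1]               # a few "core" components
wbv = peaks @ true_w + 0.05 * rng.standard_normal(9)  # e.g. whole-blood viscosity

model = Ridge(alpha=1.0).fit(peaks, wbv)
ranked = np.argsort(-np.abs(model.coef_))
print("components ranked by |coefficient|:", ranked[:5])
```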

  8. Elemental analysis of Egyptian phosphate fertilizer components.

    PubMed

    El-Bahi, S M; El-Dine, N Walley; El-Shershaby, A; Sroor, A

    2004-03-01

    The accumulation of certain elements in vitally important media such as water, soil, and food is undesirable from the medical point of view. It is clear that the fertilizers vary widely in their heavy metal and uranium content. A shielded high-purity germanium (HPGe) detector has been used to measure the natural concentrations of 238U, 232Th, and 40K in the phosphate fertilizer and its components collected from Abu-Zaabal fertilizers and chemical industries in Egypt. The concentration ranges were 134.97-681.11 Bq kg⁻¹, 125.23-239.26 Bq kg⁻¹, and 446.11-882.45 Bq kg⁻¹ for 238U, 232Th, and 40K, respectively. The absorbed dose rate was found to range from 177.14 to 445.90 nGy h⁻¹, and the external hazard index from 1.03 to 2.71. The concentrations of 22 elements (Be, Na, Mg, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Zr, Mo, Cd, Ba) in the samples under investigation were determined by inductively coupled plasma optical-emission spectrometry (ICP-OES). The results for the input raw materials (rock phosphate, limestone and sulfur) and the output product as final fertilizer are presented and discussed. PMID:14982231
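    For orientation, figures like these are commonly derived with UNSCEAR-style conversion factors from the activity concentrations; the sketch below uses those standard coefficients, which may differ slightly from the ones used in the paper.

```python
# Absorbed gamma dose rate in air and external hazard index from activity
# concentrations (Bq/kg). Coefficients follow common UNSCEAR-style practice;
# the 238U term assumes secular equilibrium with 226Ra.
def absorbed_dose_rate(c_u, c_th, c_k):
    """Absorbed gamma dose rate in air, nGy/h."""
    return 0.462 * c_u + 0.604 * c_th + 0.0417 * c_k

def external_hazard_index(c_u, c_th, c_k):
    """Dimensionless external hazard index; <= 1 is the usual criterion."""
    return c_u / 370.0 + c_th / 259.0 + c_k / 4810.0

# Upper-range sample values from the record above.
print(absorbed_dose_rate(681.11, 239.26, 882.45))
print(external_hazard_index(681.11, 239.26, 882.45))
```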

  9. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

    A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
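    A sketch of the underlying probabilistic idea: uncertain design variables are propagated through an aeroelastic response function by Monte Carlo sampling, and the damping PDF is estimated from the samples. The quadratic surrogate below is an invented stand-in for a real flutter solver, not the paper's model.

```python
# Monte Carlo propagation of uncertain design variables through a toy
# damping surrogate; the empirical histogram approximates the PDF.
import numpy as np

def damping_surrogate(stiffness, air_density):
    # Hypothetical smooth response standing in for the aeroelastic solver.
    return 0.02 + 0.5 * (stiffness - 1.0) - 0.3 * (air_density - 1.0) ** 2

rng = np.random.default_rng(4)
n = 100_000
k = rng.normal(1.0, 0.05, n)        # normalized blade stiffness
rho = rng.normal(1.0, 0.08, n)      # normalized air density
zeta = damping_surrogate(k, rho)

print("P(damping < 0) =", np.mean(zeta < 0))              # instability probability
hist, edges = np.histogram(zeta, bins=50, density=True)   # empirical PDF
```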

  10. Curvelet-based registration of multi-component seismic waves

    NASA Astrophysics Data System (ADS)

    Wang, Hairong; Cheng, Yuanfeng; Ma, Jianwei

    2014-05-01

    Registration of the travel times of PP waves and PS waves on the same coordinate axis is critical for joint interpretation in multi-component seismic exploration. In this paper, we propose a new curvelet-based registration method to improve the precision of registration, especially for data with heavy random noise. By performing registration in curvelet multiscale spaces from the coarser to the finer scale, the proposed method is not sensitive to initial values of the velocity ratio of PP waves and PS waves. Application of the new method to a real seismic dataset from Shengli Oilfield, China shows good registration results in terms of both qualitative and quantitative analysis, in comparison with a traditional registration method and a wavelet-based method.

  11. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) to learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The architecture of the learning management system based on CBA being developed at Riga Technical University, its elements and possibilities are…

  12. Balancing generality and specificity in component-based reuse

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Beck, Jon

    1992-01-01

    For a component industry to be successful, we must move beyond the current techniques of black box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to customization of reusable components.

  13. Application of independent component analysis in face images: a survey

    NASA Astrophysics Data System (ADS)

    Huang, Yuchi; Lu, Hanqing

    2003-09-01

    Face technologies, which can be applied to access control and surveillance, are essential to intelligent vision-based human-computer interaction. Research efforts in this field include face detection, face recognition, face retrieval, etc. However, these tasks are challenging because of variability in view point, lighting, pose, and expression of human faces. The ideal face representation should account for this variability so that we can develop robust algorithms for our applications. Independent Component Analysis (ICA), as an unsupervised learning technique, has been used to find such representations and has obtained good performance in some applications. In the first part of this paper, we describe the models of ICA and its extensions: Independent Subspace Analysis (ISA) and Topographic ICA (TICA). Then we summarize the progress in applications of ICA and its extensions to face images. Finally, we propose a promising direction for future research.

  14. Principal Component Analysis of Dynamically distinct D-Type Asteroids.

    NASA Astrophysics Data System (ADS)

    Nedic, Sanja; Ziffer, J.; Campins, H.; Fernandez, Y. R.; Walker, M.

    2008-09-01

    Principal Component Analysis (PCA), a common statistically based classification technique, has been used to classify asteroids into broad spectral categories. In some cases, a spectral superclass considered in isolation may undergo sub-classification (e.g. S-type subclasses). Since D-type asteroids populate at least three distinct dynamical regions in the asteroid belt -- namely Hilda, L4 Trojans and L5 Trojans -- and since the recently developed "Nice" model (Morbidelli et al. 2005, Nature 435, 462; Levison et al. 2008, ACM 2008 abstract #8156) hypothesizes that these regions may share a common origin, examining the appropriateness of a D-type sub-classification scheme is warranted. Toward this end, we performed PCA on the D-type L4, L5, and Hilda asteroids. Our PCA was based on the Sloan Digital Sky Survey broadband colors (u - g, g - r, r - i, and i - z) of 31 L4, 24 L5, and 32 Hilda asteroids with radii ranging from approximately 5 to 45 km. PCA showed that 90.2% of the variance in the spectra could be condensed into the first two principal components, PC1 and PC2, with the first and second components accounting for 50.7% and 39.4%, respectively. No significant clustering is observed on a PC1 vs. PC2 plot, suggesting the D-type L4, L5, and Hilda asteroids do not form three independent groups, but rather are spectrally indistinguishable. We performed several statistical analyses of the means and variances of the principal components to test the validity of this conclusion. No statistically significant difference in the means among the three groups was found, nor was there any such difference in the variances, although the statistic comparing the L4 Trojans and Hildas was close to the critical value. Further measurements of colors of both large and small Trojans and Hildas will let us continue to investigate the spectral diversity of these objects.

  15. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments in non-linear or kernel-based sparse representations have been shown to outperform linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
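    A minimal sketch of the KPCA step on temporal profiles, under stated assumptions: synthetic exponential-decay profiles stand in for voxel time courses, and scikit-learn's built-in approximate pre-image replaces the paper's modified pre-image problem.

```python
# Represent temporal profiles with a few kernel principal components
# and map back via the approximate pre-image.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 64)
profiles = np.array([np.exp(-t / tau) for tau in rng.uniform(0.1, 0.9, 200)])
profiles += 0.05 * rng.standard_normal(profiles.shape)

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True)
codes = kpca.fit_transform(profiles)         # compact feature-space representation
denoised = kpca.inverse_transform(codes)     # approximate pre-images
print("relative residual:",
      np.linalg.norm(profiles - denoised) / np.linalg.norm(profiles))
```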

  16. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
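    For context, the classic gradient-descent update for sequential PCA is Oja's rule; the sketch below shows that generic rule only, as a stand-in for the flavor of learning DOGEDYN builds on. The DOGEDYN-specific dominant-element selection and dynamic initial learning rate are not reproduced here.

```python
# Oja's rule: online gradient-descent learning of the first principal component.
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((5000, 8)) @ np.diag([3, 2, 1, 1, 1, 1, 1, 1])
X -= X.mean(axis=0)

w = rng.standard_normal(8)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Hebbian term plus weight decay

# Compare with the batch first principal component.
w_true = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print("alignment with batch PC1:", abs(w @ w_true))
```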

  17. Component analysis of respirator user training.

    PubMed

    Harber, Philip; Boumis, Robert J; Su, Jing; Barrett, Sarah; Alongi, Gabriela

    2013-01-01

    Respirators must be properly used to be effective. In an experimental protocol, 145 subjects were trained and then observed donning and doffing respirators. Filtering facepiece and dual cartridge half face mask types were studied. Subjects were then tested for knowledge and for proper performance using video recording analysis. Knowledge tests showed adequate learning, but performance was often poor. Inspection, strap tension (half mask), seal checking, and avoiding mask contact during doffing were particularly problematic. Mask positioning was generally well done. Correlation between knowledge and performance for specific items was generally poor, although there was a weak correlation between overall knowledge and overall performance (rho = 0.32) for the half mask users. Actual unprompted performance as well as knowledge and fit-testing should be assessed for user certification. Respirator design approval should consider users' ability to learn proper technique. PMID:24011265

  18. Component analysis of the protein hydration entropy

    NASA Astrophysics Data System (ADS)

    Chong, Song-Ho; Ham, Sihyun

    2012-05-01

    We report the development of an atomic decomposition method of the protein solvation entropy in water, which allows us to understand global change in the solvation entropy in terms of local changes in protein conformation as well as in hydration structure. This method can be implemented via a combined approach based on molecular dynamics simulation and integral-equation theory of liquids. An illustrative application is made to 42-residue amyloid-beta protein in water. We demonstrate how this method enables one to elucidate the molecular origin for the hydration entropy change upon conformational transitions of protein.

  19. Advances in resonance based NDT for ceramic components

    NASA Astrophysics Data System (ADS)

    Hunter, L. J.; Jauriqui, L. M.; Gatewood, G. D.; Sisneros, R.

    2012-05-01

    The application of resonance based non-destructive testing methods has been providing benefit to manufacturers of metal components in the automotive and aerospace industries for many years. Recent developments in resonance based technologies are now allowing the application of resonance NDT to ceramic components including turbine engine components, armor, and hybrid bearing rolling elements. Application of higher frequencies and advanced signal interpretation are now allowing Process Compensated Resonance Testing to detect both internal material defects and surface breaking cracks in a variety of ceramic components. Resonance techniques can also be applied to determine material properties of coupons and to evaluate process capability for new manufacturing methods.

  20. Analysis of nuclear power plant component failures

    SciTech Connect

    Not Available

    1984-01-01

    The items that caused 90% of the nuclear unit outages and/or deratings between 1971 and 1980 are shown, and the magnitude of the problem is indicated by an estimate of the power replacement cost when the units are out of service or derated. The funding EPRI has provided on these specific items for R and D and technology transfer in the past, and the funding planned for the future (1982 to 1986), are shown. EPRI's R and D may help the utilities with only a small part of their nuclear unit outage problems. For example, refueling is the major cause of nuclear unit outages or deratings, and the steam turbine is the second major cause of nuclear unit outages; however, these two items have been ranked fairly low on the EPRI priority list for R and D funding. Other items such as nuclear safety (NRC requirements), reactor general, reactor and safety valves and piping, and reactor fuel appear to be receiving more priority than is warranted by the analysis of nuclear unit outage causes.

  1. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery, emergency response, financial and intelligence analysis exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.

  2. [Royal jelly: component efficiency, analysis, and standardisation].

    PubMed

    Oršolić, Nada

    2013-09-01

    Royal jelly is a viscous substance secreted by the hypopharyngeal and mandibular glands of worker honeybees (Apis mellifera) that contains a considerable amount of proteins, free amino acids, lipids, vitamins, sugars, and bioactive substances such as 10-hydroxy-trans-2-decenoic acid, antibacterial protein, and 350-kDa protein. These properties make it an attractive ingredient in various types of healthy foods. This article provides a brief review of the molecular mechanisms involved in the development of certain disorders that can be remedied by royal jelly, based on a selection of in vivo and in vitro studies. It also describes the current understanding of the mechanisms and beneficial effects by which royal jelly helps to combat aging-related complications. Royal jelly has been reported to exhibit beneficial physiological and pharmacological effects in mammals, including vasodilative and hypotensive activities, antihypercholesterolemic activity, and antitumor activity. As its composition varies significantly (for both fresh and dehydrated samples), the article offers a few recommendations for defining new quality standards.

  3. Identifying signatures of sexual selection using genomewide selection components analysis

    PubMed Central

    Flanagan, Sarah P; Jones, Adam G

    2015-01-01

    Sexual selection must affect the genome for it to have an evolutionary impact, yet signatures of selection remain elusive. Here we use an individual-based model to investigate the utility of genome-wide selection components analysis, which compares allele frequencies of individuals at different life history stages within a single population to detect selection without requiring a priori knowledge of traits under selection. We modeled a diploid, sexually reproducing population and introduced strong mate choice on a quantitative trait to simulate sexual selection. Genome-wide allele frequencies in adults and offspring were compared using weighted FST values. The average number of outlier peaks (i.e., those with significantly large FST values) with a quantitative trait locus in close proximity (“real” peaks) represented correct diagnoses of loci under selection, whereas peaks above the FST significance threshold without a quantitative trait locus reflected spurious peaks. We found that, even with moderate sample sizes, signatures of strong sexual selection were detectable, but larger sample sizes improved detection rates. The model was better able to detect selection with more neutral markers, and when quantitative trait loci and neutral markers were distributed across multiple chromosomes. Although environmental variation decreased detection rates, the identification of real peaks nevertheless remained feasible. We also found that detection rates can be improved by sampling multiple populations experiencing similar selection regimes. In short, genome-wide selection components analysis is a challenging but feasible approach for the identification of regions of the genome under selection. PMID:26257884
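    A sketch of the core computation described above, with simplifications: per-locus Wright's FST between adult and offspring allele frequencies and an empirical outlier cut. The study uses weighted FST; this plain version and its synthetic frequencies only illustrate the outlier-scan logic.

```python
# Per-locus FST between life-history stages and an empirical outlier threshold.
import numpy as np

rng = np.random.default_rng(7)
n_loci = 1000
p_off = rng.uniform(0.05, 0.95, n_loci)           # offspring allele frequencies
p_adult = p_off.copy()
p_adult[::100] += 0.15                            # simulate loci under selection
p_adult = np.clip(p_adult, 0.01, 0.99)

p_bar = (p_off + p_adult) / 2
h_t = 2 * p_bar * (1 - p_bar)                     # pooled expected heterozygosity
h_s = (2 * p_off * (1 - p_off) + 2 * p_adult * (1 - p_adult)) / 2
fst = (h_t - h_s) / h_t

threshold = np.quantile(fst, 0.99)                # empirical significance cut
print("outlier loci:", np.flatnonzero(fst > threshold)[:10])
```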

  4. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  5. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. This VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several such blocks, and PCA is computed individually for each block. In VCA, a LiDAR point cloud is represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space which, when combined, represent features of the entire 3-D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects (vegetation, vehicles, buildings, and barriers) with an overall accuracy of 93.8%.
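    A condensed sketch of this pipeline under stated assumptions: synthetic multi-blob clouds stand in for the four object classes, an 8x8x8 density grid stands in for the voxelization, and one PCA per octant provides the block features fed to an SVM.

```python
# Voxelize point clouds, extract block-based PCA features, classify with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def voxelize(points, g=8):
    """Normalize a cloud to the unit cube and return a g x g x g density grid."""
    p = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-9)
    grid, _ = np.histogramdd(p, bins=(g, g, g), range=[(0, 1)] * 3)
    return grid / len(points)

def make_cloud(cls, rng):
    """Class k = k+1 Gaussian blobs; a crude stand-in for object geometry."""
    centers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])[:cls + 1]
    return np.vstack([rng.normal(c, 0.15, (120, 3)) for c in centers])

rng = np.random.default_rng(8)
labels = np.array([i % 4 for i in range(80)])
vox = np.stack([voxelize(make_cloud(k, rng)) for k in labels])   # (80, 8, 8, 8)

# Block-based PCA: one PCA per 4x4x4 octant, block features concatenated.
halves = (slice(0, 4), slice(4, 8))
feats = [PCA(n_components=5).fit_transform(vox[:, bx, by, bz].reshape(80, -1))
         for bx in halves for by in halves for bz in halves]
X = np.hstack(feats)

clf = SVC(kernel="rbf").fit(X[:60], labels[:60])
print("held-out accuracy:", clf.score(X[60:], labels[60:]))
```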

  6. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet Wavelet Transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively; these values are not only greater than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. Also, the robustness of the method in the presence of noise is shown via experimental result.

  7. Key components of financial-analysis education for clinical nurses.

    PubMed

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.
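    A tiny illustration of the last two components (financial ratios and their interpretation), using standard textbook ratio definitions and invented balance-sheet and income-statement figures.

```python
# Three common financial ratios from hypothetical statement figures.
def financial_ratios(current_assets, current_liabilities,
                     total_liabilities, total_assets, net_income):
    return {
        "current_ratio": current_assets / current_liabilities,   # liquidity
        "debt_ratio": total_liabilities / total_assets,          # leverage
        "return_on_assets": net_income / total_assets,           # profitability
    }

print(financial_ratios(current_assets=5.2e6, current_liabilities=3.1e6,
                       total_liabilities=9.8e6, total_assets=21.4e6,
                       net_income=1.6e6))
```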

  8. Three component microseism analysis in Australia from deconvolution enhanced beamforming

    NASA Astrophysics Data System (ADS)

    Gal, Martin; Reading, Anya; Ellingsen, Simon; Koper, Keith; Burlacu, Relu; Tkalčić, Hrvoje; Gibbons, Steven

    2016-04-01

    Ocean-induced microseisms in the 2-10 s period range are generated in deep oceans and near coastal regions as body and surface waves. The generation of these waves can take place over an extended area and in a variety of geographical locations at the same time. It is therefore common to observe multiple arrivals with a variety of slowness vectors, which motivates measuring multiple arrivals accurately. We present a deconvolution-enhanced direction-of-arrival algorithm, for single- and three-component arrays, based on CLEAN. The algorithm iteratively removes sidelobe contributions in the power spectrum, thereby improving the signal-to-noise ratio of weaker sources. The power level on each component (vertical, radial, and transverse) can be accurately estimated as the beamformer decomposes the power spectrum into point sources. We first apply the CLEAN-aided beamformer to synthetic data to show its performance under known conditions and then evaluate real (observed) data from a range of arrays with apertures between 10 and 70 km (ASAR, WRA, and NORSAR) to showcase the improvement in resolution. We further give a detailed analysis of the three-component wavefield in Australia, including source locations, power levels, and phase ratios, from two spiral arrays (PSAR and SQspa). For PSAR the analysis is carried out in the frequency range 0.35-1 Hz. We find LQ, Lg, and fundamental- and higher-mode Rg wave phases. Additionally, we also observe the Sn phase. This is the first time this has been achieved through beamforming on microseism noise, and it underlines the potential for extra seismological information that can be extracted using the new implementation of CLEAN. The fundamental-mode Rg waves are dominant in power at low frequencies and show power levels equal to LQ towards higher frequencies. Generation locations of Rg and LQ are mildly correlated for low frequencies and uncorrelated for higher frequencies. Results from SQspa will discuss lower frequencies around the
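    A heavily simplified, single-frequency, vertical-component sketch of a CLEAN-style beamformer (no 3-C handling and none of the authors' refinements): find the strongest arrival in the slowness power map, then remove a fraction of its rank-one contribution from the cross-spectral density matrix, and repeat.

```python
# Single-frequency CLEAN beamforming over a slowness grid.
import numpy as np

def clean_beamform(csdm, coords, freq, slow_grid, n_iter=20, gain=0.3):
    """csdm: (M, M) cross-spectral matrix; coords: (M, 2) sensor x, y in km;
    slow_grid: (K, 2) candidate slowness vectors in s/km."""
    m = len(coords)
    steer = np.exp(-2j * np.pi * freq * coords @ slow_grid.T) / np.sqrt(m)
    clean_map = np.zeros(len(slow_grid))
    r = csdm.astype(complex).copy()
    for _ in range(n_iter):
        # Bartlett power on the current (residual) cross-spectral matrix.
        power = np.real(np.einsum("mk,mn,nk->k", steer.conj(), r, steer))
        k = int(np.argmax(power))
        clean_map[k] += gain * power[k]               # accumulate cleaned arrival
        a = steer[:, k:k + 1]
        r = r - gain * power[k] * (a @ a.conj().T)    # subtract peak and sidelobes
    return clean_map

# Tiny demo: two plane waves at 0.5 Hz on a random 9-sensor array.
rng = np.random.default_rng(15)
coords = rng.uniform(-10, 10, (9, 2))
s_true = np.array([[0.25, 0.0], [-0.1, 0.2]])          # slownesses in s/km
a_true = np.exp(-2j * np.pi * 0.5 * coords @ s_true.T) / 3.0
csdm = a_true @ np.diag([1.0, 0.5]) @ a_true.conj().T
sx = np.linspace(-0.4, 0.4, 33)
grid = np.array([(u, v) for u in sx for v in sx])
cmap = clean_beamform(csdm, coords, 0.5, grid)
print("two strongest cleaned arrivals:", grid[np.argsort(cmap)[-2:]])
```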

  9. Application of principal component analysis in grouping geomorphic parameters of a watershed for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Sharma, S. K.; Gajbhiye, S.; Tignath, S.

    2015-03-01

    Principal component analysis has been applied to 13 dimensionless geomorphic parameters of 8 sub-watersheds of the Kanhiya Nala watershed, a tributary of the Tons River located in parts of the Panna and Satna districts of Madhya Pradesh, India, to group the parameters under different components based on significant correlations. The results clearly reveal that some of these parameters are strongly correlated with the components, but the texture ratio and the hypsometric integral do not show correlation with any of the components, so they have been screened out of the analysis. The principal component loading matrix obtained using the correlation matrix of the eleven remaining parameters reveals that the first three components together account for 93.71% of the total explained variance. Principal component loading is therefore applied to obtain better correlations and to clearly group the parameters into physically significant components. Based on the properties of the geomorphic parameters, the three principal components were defined as drainage, slope (or steepness), and shape components. One parameter from each of the significant components may form a set of independent parameters for modeling hydrologic responses such as runoff and sediment yield from small watersheds.
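    A minimal sketch of correlation-matrix PCA with a loading matrix, as used above; random numbers stand in for the 11 standardized geomorphic parameters of the 8 sub-watersheds.

```python
# Correlation-matrix PCA: explained variance and parameter-component loadings.
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((8, 11))                 # sub-watersheds x parameters
Z = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize: correlation PCA

eigval, eigvec = np.linalg.eigh(np.corrcoef(Z.T))
order = np.argsort(eigval)[::-1]                 # descending eigenvalues
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()
loadings = eigvec * np.sqrt(np.clip(eigval, 0, None))  # parameter-PC correlations
print("variance explained by first three PCs:", explained[:3].sum())
print("loadings of parameter 0 on PC1-PC3:", loadings[0, :3])
```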

  10. Reliability-based robust design optimization of vehicle components, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based design optimization, the reliability sensitivity analysis and robust design method are employed to present a practical and effective approach for reliability-based robust design optimization of vehicle components. A procedure for reliability-based robust design optimization of vehicle components is proposed. Application of the method is illustrated by reliability-based robust design optimization of axle and spring. Numerical results have shown that the proposed method can be trusted to perform reliability-based robust design optimization of vehicle components.
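    As a generic illustration of the reliability estimates such procedures are built on (not the paper's formulation), the sketch below runs a Monte Carlo stress-strength analysis with hypothetical load and material parameters: failure occurs when random stress exceeds random strength.

```python
# Monte Carlo stress-strength reliability estimate.
import numpy as np

rng = np.random.default_rng(10)
n = 1_000_000
strength = rng.normal(600.0, 40.0, n)    # MPa, hypothetical axle material
stress = rng.normal(450.0, 60.0, n)      # MPa, hypothetical service load

p_fail = np.mean(stress >= strength)
print(f"reliability = {1 - p_fail:.5f}")
```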

  11. Reliability-based robust design optimization of vehicle components, Part II: Case studies

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based optimization, the reliability-based sensitivity analysis and robust design method were employed to propose an effective approach for reliability-based robust design optimization of vehicle components in Part I. Applications of the method to reliability-based robust optimization of vehicle components are further discussed in this paper. Examples of axles, torsion bars, and coil and composite springs are illustrated for numerical investigation. The results show that the proposed method is efficient for reliability-based robust design optimization of vehicle components.

  12. Quantitative Analysis of Porosity and Transport Properties by FIB-SEM 3D Imaging of a Solder Based Sintered Silver for a New Microelectronic Component

    NASA Astrophysics Data System (ADS)

    Rmili, W.; Vivet, N.; Chupin, S.; Le Bihan, T.; Le Quilliec, G.; Richard, C.

    2016-04-01

    As part of the development of a new assembly technology to achieve bonding for an innovative silicon carbide (SiC) power device used in harsh environments, the aim of this study is to compare two silver sintering profiles and define the better candidate die attach material for this new component. To achieve this goal, the solder joints have been characterized in terms of porosity by determining the morphological characteristics of the material heterogeneities and estimating their thermal and electrical transport properties. The three-dimensional (3D) microstructure of the sintered silver samples has been reconstructed using a focused ion beam scanning electron microscope (FIB-SEM) tomography technique. The sample preparation and the experimental milling and imaging parameters have been optimized in order to obtain a high-quality 3D reconstruction. Volume fractions and the volumetric connectivity of the individual phases (silver and voids) have been determined. The effective thermal and electrical conductivities of the samples and the tortuosity of the silver phase have also been evaluated by solving the diffusive transport equation.
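    A sketch of the porosity and connectivity step on a segmented voxel volume (a random synthetic volume stands in for the reconstructed FIB-SEM data; the transport-property solve is not reproduced): volume fractions come from voxel counts, and void connectivity from 3-D connected-component labeling.

```python
# Porosity and void connectivity from a boolean (void = True) voxel volume.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)
volume = rng.random((64, 64, 64)) < 0.15          # synthetic segmented volume

porosity = volume.mean()                          # void volume fraction
labels, n_clusters = ndimage.label(volume)        # 3-D connected components
sizes = np.bincount(labels.ravel())[1:]           # cluster sizes (label 0 = silver)
connectivity = sizes.max() / sizes.sum()          # share in the largest void cluster

print(f"porosity = {porosity:.3f}, void clusters = {n_clusters}, "
      f"largest-cluster share = {connectivity:.3f}")
```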

  13. Effect of Preprocessing for Result of Independent component analysis.

    PubMed

    Zhang, Yun

    2005-01-01

    In recent years, researchers in biomedical engineering and related fields have concentrated on the study of the bioelectrical activity of different cortical areas of the human brain under different evoked and cognitive stimulation conditions, in order to probe human psychology and physiology and to control the external environment. Independent component analysis is a tool that can help to distinguish and understand various EEG signals; even for signals about which very little is known, it can provide a useful interpretation. In this paper a new algorithm is introduced that is suited to the preprocessing of data for independent component analysis. This algorithm not only accelerates the decomposition into independent components, but also yields a higher amplitude in the extraction of steady-state visual evoked potentials.
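    For reference, the standard centering-and-whitening preprocessing that ICA pipelines presume is sketched below (the paper's specific speed-up is not reproduced): after whitening, the channels are uncorrelated with unit variance, which is what lets ICA iterations converge quickly.

```python
# Center and whiten a multichannel recording prior to ICA.
import numpy as np

def whiten(x):
    """x: (channels, samples) data block; returns the whitened data."""
    x = x - x.mean(axis=1, keepdims=True)          # center each channel
    d, e = np.linalg.eigh(np.cov(x))               # eigendecompose covariance
    return (e @ np.diag(d ** -0.5) @ e.T) @ x      # decorrelate, unit variance

rng = np.random.default_rng(12)
mixed = rng.standard_normal((4, 4)) @ rng.standard_normal((4, 2000))
print(np.allclose(np.cov(whiten(mixed)), np.eye(4), atol=1e-10))
```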

  14. Using Dynamic Master Logic Diagram for component partial failure analysis

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    A methodology using the Dynamic Master Logic Diagram (DMLD) for the evaluation of component partial failure is presented. Since past PRAs have not focused on partial failure effects, component reliability has been based only on the binary state assumption, i.e., defining a component as either fully failed or functioning. This paper develops an approach to predict and estimate component partial failure on the basis of the fuzzy state assumption. An example of the application of this methodology to the reliability function diagram of a centrifugal pump is presented.

  15. Acceleration of dynamic fluorescence molecular tomography with principal component analysis

    PubMed Central

    Zhang, Guanglei; He, Wei; Pu, Huangsheng; Liu, Fei; Chen, Maomao; Bai, Jing; Luo, Jianwen

    2015-01-01

    Dynamic fluorescence molecular tomography (FMT) is an attractive imaging technique for three-dimensionally resolving the metabolic process of fluorescent biomarkers in small animals. When combined with compartmental modeling, dynamic FMT can be used to obtain parametric images which can provide quantitative pharmacokinetic information for drug development and metabolic research. However, the computational burden of dynamic FMT is extremely large due to the big data sets arising from the long measurement process and the densely sampled measurements. In this work, we propose to accelerate the reconstruction process of dynamic FMT based on principal component analysis (PCA). Taking advantage of the compression property of PCA, the dimension of the sub weight matrix used for solving the inverse problem is reduced by retaining only a few principal components which can retain most of the effective information of the sub weight matrix. Therefore, the reconstruction process of dynamic FMT can be accelerated by solving the smaller-scale inverse problem. Numerical simulation and a mouse experiment are performed to validate the performance of the proposed method. Results show that the proposed method can greatly accelerate the reconstruction of parametric images in dynamic FMT almost without degradation in image quality. PMID:26114027
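    A numerical sketch of the dimension-reduction idea, with SVD truncation standing in for the paper's PCA-based compression and arbitrary placeholder sizes: keep the leading components of the weight matrix and solve the smaller inverse problem.

```python
# Truncated-SVD solve of a linear inverse problem as a stand-in for
# PCA-compressed reconstruction.
import numpy as np

rng = np.random.default_rng(13)
W = rng.standard_normal((2000, 500))      # weight matrix (measurements x unknowns)
x_true = np.maximum(rng.standard_normal(500), 0.0)
y = W @ x_true + 0.01 * rng.standard_normal(2000)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1           # keep 99% of the energy
x_hat = Vt[:r].T @ ((U[:, :r].T @ y) / s[:r])        # reduced-size solve

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"components kept: {r}/{len(s)}, relative error: {rel_err:.3f}")
```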

  16. Independent component analysis for unmixing multi-wavelength photoacoustic images

    NASA Astrophysics Data System (ADS)

    An, Lu; Cox, Ben

    2016-03-01

    Independent component analysis (ICA) is a blind source unmixing method that may be used under certain circumstances to decompose multi-wavelength photoacoustic (PA) images into separate components representing individual chromophores. It has the advantages of being fast, easy to implement, and computationally inexpensive. This study uses simulated multi-wavelength PA images to investigate the conditions required for ICA to be an accurate unmixing method and compares its performance to linear inversion (LI). An approximate fluence adjustment based on spatially homogeneous optical properties equal to those of the background region was applied to the PA images before unmixing with ICA or LI. ICA is shown to provide accurate separation of the chromophores in cases where the absorption coefficients are lower than certain thresholds, some of which are comparable to physiologically relevant values. However, the results also show that the performance of ICA abruptly deteriorates when the absorption is increased beyond these thresholds. In addition, the accuracy of ICA decreases in the presence of spatially inhomogeneous absorption in the background.

  17. Principal Component Analysis for pattern recognition in volcano seismic spectra

    NASA Astrophysics Data System (ADS)

    Unglert, Katharina; Jellinek, A. Mark

    2016-04-01

    Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
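    A skeleton of the described pattern-recognition chain, on synthetic spectra: PCA on the spectra, then hierarchical clustering of the principal component scores. Spectral shapes and cluster counts here are invented for illustration.

```python
# PCA on spectra followed by hierarchical clustering of the PC scores.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(14)
freqs = np.linspace(0.1, 10, 300)

def peak(f0):
    """Simple Lorentzian-like spectral shape centered at f0."""
    return 1.0 / (1.0 + ((freqs - f0) / 0.4) ** 2)

spectra = np.array([peak(rng.choice([1.0, 3.0, 6.0])) for _ in range(60)])
spectra += 0.02 * rng.standard_normal(spectra.shape)

scores = PCA(n_components=3).fit_transform(spectra)    # dominant components
tree = linkage(scores, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")   # three spectral families
print("cluster sizes:", np.bincount(clusters)[1:])
```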

  18. Principal Component Analysis of Spectroscopic Imaging Data in Scanning Probe Microscopy

    SciTech Connect

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach to data analysis in the band-excitation family of scanning probe microscopies based on principal component analysis (PCA) is explored. PCA utilizes the similarity between spectra within the image to select the relevant response components. For small signal variations within the image, the PCA components coincide with the results of deconvolution using a simple harmonic oscillator model. For strong signal variations, PCA provides an effective approach to rapidly process, de-noise and compress the data. The extension of PCA to correlation function analysis is demonstrated. The prospects of PCA as a universal tool for data analysis and representation in multidimensional SPMs are discussed.
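
    A hedged sketch of the de-noise/compress idea on synthetic data (the band-excitation specifics of the paper are not modeled): reconstruct each pixel's spectrum from only the leading principal components.

```python
# PCA de-noising/compression sketch: keep k leading components of a
# (pixels x spectral bins) matrix and reconstruct. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
ny, nx, n_bins = 32, 32, 200       # image grid, spectral bins per pixel
t = np.linspace(0, 1, n_bins)

# Smoothly varying synthetic response + noise at every pixel.
amp = 1 + 0.5 * rng.random((ny, nx))
clean = amp[..., None] * np.sin(2 * np.pi * 5 * t)
data = clean + 0.3 * rng.standard_normal(clean.shape)

X = data.reshape(-1, n_bins)               # pixels x spectral bins
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)

k = 2                                      # keep only the leading components
X_denoised = (U[:, :k] * s[:k]) @ Vt[:k] + mu
err_ratio = np.linalg.norm(X - clean.reshape(-1, n_bins)) / \
            np.linalg.norm(X_denoised - clean.reshape(-1, n_bins))
print(f"error reduced by a factor of {err_ratio:.1f} with {k} components")
```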

  19. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    SciTech Connect

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J.; Deroo, P.

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ≈0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric detrending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
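
    The detrending idea, treating each spectral channel's light curve as a mixture of the astrophysical signal and statistically independent systematics, might be sketched as below; the transit shape, systematic model, and mixing are synthetic stand-ins, not the NICMOS pipeline.

```python
# Sketch of ICA-based light-curve detrending on synthetic data: separate
# a transit-like source from a non-Gaussian systematic shared by channels.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_time = 500
t = np.linspace(-0.1, 0.1, n_time)

transit = np.where(np.abs(t) < 0.04, -0.01, 0.0)         # box-shaped transit
systematic = 0.005 * np.sin(40 * t) * np.exp(3 * t)      # non-Gaussian drift

# Several spectral channels = different mixtures of signal + systematics.
mixing = rng.uniform(0.5, 1.5, size=(6, 2))
X = mixing @ np.vstack([transit, systematic])
X += 1e-4 * rng.standard_normal(X.shape)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T)           # columns: recovered source signals
# Identify the transit-like source as the one most correlated with the box.
corr = [abs(np.corrcoef(src, transit)[0, 1]) for src in sources.T]
print("|correlation| of each recovered source with the true transit:",
      np.round(corr, 3))
```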

  20. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    ERIC Educational Resources Information Center

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to examine the components of the internal control-based administrative system in secondary schools and to perform a Confirmatory Factor Analysis (CFA) to confirm the goodness of fit between the empirical data and the component model that resulted from the CFA. The study consisted of three steps: 1) studying of principles, ideas, and theories…

  1. A component based software framework for vision measurement

    NASA Astrophysics Data System (ADS)

    He, Lingsong; Bei, Lei

    2011-12-01

    In vision measurement applications, an optimal result is usually achieved by combining different processing steps and algorithms. This paper proposes a component-based software framework for vision measurement. First, commonly used processing algorithms of vision measurement are encapsulated into components contained in a component library. Each component, which is designed to have its own properties, also provides I/O interfaces for external calls. Second, a software bus is proposed which can plug components and assemble them to form a vision measurement application. Besides component management and data-line linking, the software bus also provides a message distribution service, which is used to drive all the plugged components properly. Third, an XML-based script language is proposed to record the plugging and assembling process of a vision measurement application, which can be used to rebuild the application later. Finally, based on this framework, an application of landmark extraction used in camera calibration is introduced to show how the framework works.
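
    Schematically, the component-library/software-bus design might look like the Python toy below; all class and component names are invented for illustration, and the original framework's message service and XML scripting are reduced to a simple linear pipeline.

```python
# Purely illustrative sketch of a component library plus a "software bus"
# that plugs components and drives them in order. Names are invented.
from typing import Callable, Dict, List

class Component:
    """A processing step with its own name, mimicking an encapsulated algorithm."""
    def __init__(self, name: str, func: Callable):
        self.name, self.func = name, func
    def run(self, data):
        return self.func(data)

class SoftwareBus:
    """Plugs components and executes them in the assembled order."""
    def __init__(self):
        self.pipeline: List[Component] = []
    def plug(self, component: Component):
        self.pipeline.append(component)
        return self
    def execute(self, data):
        for comp in self.pipeline:          # message distribution, simplified
            data = comp.run(data)
            print(f"[bus] {comp.name} done")
        return data

# Assemble a toy "vision measurement" application from library components.
library: Dict[str, Component] = {
    "smooth":    Component("smooth",    lambda img: [0.5 * v for v in img]),
    "threshold": Component("threshold", lambda img: [v > 0.2 for v in img]),
    "count":     Component("count",     lambda img: sum(img)),
}
bus = SoftwareBus().plug(library["smooth"]).plug(library["threshold"]).plug(library["count"])
print("landmarks found:", bus.execute([0.1, 0.9, 0.8, 0.05]))
```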

  2. A principal component analysis of transmission spectra of wine distillates

    NASA Astrophysics Data System (ADS)

    Rogovaya, M. V.; Sinitsyn, G. V.; Khodasevich, M. A.

    2014-11-01

    A chemometric method of decomposing multidimensional data into a small-sized space, the principal component method, has been applied to the transmission spectra of vintage Moldovan wine distillates. A sample of 42 distillates aged from four to seven years from six producers has been used to show the possibility of identifying a producer in a two-dimensional space of principal components describing 94.5% of the data-matrix dispersion. Analysis of the loadings of the first two principal components has shown that, in order to measure the optical characteristics of the samples under study using only two wavelengths, it is necessary to select 380 and 540 nm, instead of the standard 420 and 520 nm, to describe the variability of the distillates by one principal component, or 370 and 520 nm to describe the variability by two principal components.
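
    The wavelength-selection step, reading off where the loadings of the leading principal components peak, can be sketched as follows on synthetic spectra (the latent bands below are planted to mimic the paper's conclusion, not derived from its data):

```python
# Sketch: pick informative wavelengths from PCA loading peaks.
# Synthetic transmission spectra with two latent spectral factors.
import numpy as np

rng = np.random.default_rng(5)
wavelengths = np.arange(350, 601, 10)            # nm
n_samples = 42
f1 = np.exp(-(wavelengths - 380.0)**2 / 800.0)   # planted latent band 1
f2 = np.exp(-(wavelengths - 540.0)**2 / 800.0)   # planted latent band 2
C = rng.random((n_samples, 2))                   # per-sample contributions
S = C @ np.vstack([f1, f2]) + 0.01 * rng.standard_normal((n_samples, wavelengths.size))

Sc = S - S.mean(axis=0)
U, s, Vt = np.linalg.svd(Sc, full_matrices=False)
for i in range(2):
    peak = wavelengths[np.argmax(np.abs(Vt[i]))]
    print(f"PC{i+1}: {100 * s[i]**2 / np.sum(s**2):.1f}% of variance, "
          f"loading peaks near {peak} nm")
```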

  3. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as inpatients, before they were sent home, or before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.

  4. Using Independent Component Analysis to Separate Signals in Climate Data

    SciTech Connect

    Fodor, I K; Kamath, C

    2003-01-28

    Global temperature series have contributions from different sources, such as volcanic eruptions and El Niño Southern Oscillation variations. We investigate independent component analysis (ICA) as a technique to separate unrelated sources present in such series. We first use artificial data, with known independent components, to study the conditions under which ICA can separate the individual sources. We then illustrate the method with climate data from the National Centers for Environmental Prediction.

  5. A Multiresolution Independent Component Analysis for textile images

    NASA Astrophysics Data System (ADS)

    Coltuc, D.; Fournel, T.; Becker, J. M.; Jourlin, M.

    2007-07-01

    This paper aims to provide an efficient tool for pattern recognition in the fight against counterfeiting in textile design. As fabric patterns to be protected can present numerous and various characteristics related not only to intensity or color features but also to texture and relative-scale features, we introduce a tool able to separate image independent components at different resolutions. The suggested `Multiresolution ICA' combines the properties of both the wavelet transform and Independent Component Analysis.
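
    One plausible reading of such a multiresolution scheme, decompose the images into wavelet bands and run ICA within each band, is sketched below with a hand-rolled one-level Haar step (to avoid extra dependencies) and synthetic "textile" patterns; this is only the general combination the abstract describes, not the authors' algorithm.

```python
# Sketch: one wavelet (Haar) level per image, then ICA within each band.
# Patterns, mixing, and parameters are synthetic and illustrative.
import numpy as np
from sklearn.decomposition import FastICA

def haar_level(img):
    """One crude 2-D Haar step: (approximation, diagonal-difference detail)."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 1::2]) / 2
    return a, d

n = 64
yy, xx = np.mgrid[0:n, 0:n]
# Two synthetic "textile" sources: coarse stripes and a fine diagonal weave.
s_coarse = np.sign(np.sin(2 * np.pi * yy / 16.0 + 0.3))
s_fine = np.sign(np.sin(2 * np.pi * (xx + yy) / 4.0 + 0.3))
mixes = [a * s_coarse + b * s_fine for a, b in [(1.0, 0.3), (0.4, 1.0), (0.7, 0.7)]]

for band_name, band_idx in [("approximation", 0), ("detail", 1)]:
    band = np.vstack([haar_level(m)[band_idx].ravel() for m in mixes])
    ica = FastICA(n_components=2, random_state=0)
    comps = ica.fit_transform(band.T).T     # separated components at this scale
    print(band_name, "band: recovered component array shape", comps.shape)
```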

  6. Component-based approach to robot vision for computational efficiency

    NASA Astrophysics Data System (ADS)

    Lee, Junhee; Kim, Dongsun; Park, Yeonchool; Park, Sooyong; Lee, Sukhan

    2007-12-01

    The purpose of this paper is to show the merit and feasibility of the component-based approach in robot system integration. Many methodologies, such as the 'component-based approach' and the 'middleware-based approach', have been suggested for integrating various complex functions on robot systems efficiently. However, these methodologies are not broadly used in robot function development, because such 'top-down' methodologies were modeled and researched in the software engineering field, which differs from robot function research, and so they are not trusted by function developers. Developers' main concern about these methodologies is the performance decrease that originates from the overhead of a framework. This paper counters this misunderstanding by showing an increase in time performance when an experiment uses the 'Self-Healing, Adaptive and Growing softwarE (SHAGE)' framework, one of the component-based frameworks. As an example of a real robot function, visual object recognition is chosen for the experiment.

  7. Computer compensation for NMR quantitative analysis of trace components

    SciTech Connect

    Nakayama, T.; Fujiwara, Y.

    1981-07-22

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as a theoretical curve of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA.
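
    The underlying fit, overlapping Lorentzian lines adjusted by least squares so that a weak line can be quantified, is easy to sketch with modern tools (this is of course not the original program; peak positions and amplitudes are invented):

```python
# Sketch: quantify a trace component by least-squares fitting of two
# overlapping Lorentzian lines. All line parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(x, a1, x1, w1, a2, x2, w2):
    """Sum of two Lorentzian lines a / (1 + ((x - x0)/w)^2)."""
    return (a1 / (1 + ((x - x1) / w1)**2) +
            a2 / (1 + ((x - x2) / w2)**2))

rng = np.random.default_rng(7)
x = np.linspace(-5, 5, 400)
# Strong main line plus a 2% trace component on its shoulder.
y = two_lorentzians(x, 1.0, 0.0, 0.3, 0.02, 0.9, 0.2)
y += 0.002 * rng.standard_normal(x.size)

p0 = [1.0, 0.1, 0.4, 0.05, 1.0, 0.3]        # rough initial guesses
popt, _ = curve_fit(two_lorentzians, x, y, p0=p0)
print(f"trace component: amplitude {popt[3]:.4f} at x = {popt[4]:.3f}")
```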

  8. Bonding and Integration Technologies for Silicon Carbide Based Injector Components

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.; Singh, Mrityunjay

    2008-01-01

    Advanced ceramic bonding and integration technologies play a critical role in the fabrication and application of silicon carbide based components for a number of aerospace and ground based applications. One such application is a lean direct injector for a turbine engine to achieve low NOx emissions. Ceramic to ceramic diffusion bonding and ceramic to metal brazing technologies are being developed for this injector application. For the diffusion bonding, titanium interlayers (PVD and foils) were used to aid in the joining of silicon carbide (SiC) substrates. The influence of such variables as surface finish, interlayer thickness (10, 20, and 50 microns), processing time and temperature, and cooling rates was investigated. Microprobe analysis was used to identify the phases in the bonded region. For bonds that were not fully reacted, an intermediate phase, Ti5Si3Cx, formed that is thermally incompatible in its thermal expansion and caused thermal stresses and cracking during the processing cool-down. Thinner titanium interlayers and/or longer processing times resulted in stable and compatible phases that did not contribute to microcracking and resulted in an optimized microstructure. Tensile tests on the joined materials resulted in strengths of 13-28 MPa depending on the SiC substrate material. Non-destructive evaluation using ultrasonic immersion showed well formed bonds. For the joining technology of brazing Kovar fuel tubes to silicon carbide, preliminary development of the joining approach has begun. Various technical issues and requirements for the injector application are addressed.

  9. Critical Components of Effective School-Based Feeding Improvement Programs

    ERIC Educational Resources Information Center

    Bailey, Rita L.; Angell, Maureen E.

    2004-01-01

    This article identifies critical components of effective school-based feeding improvement programs for students with feeding problems. A distinction is made between typical school-based feeding management and feeding improvement programs, where feeding, independent functioning, and mealtime behaviors are the focus of therapeutic strategies.…

  10. Parallel PDE-Based Simulations Using the Common Component Architecture

    SciTech Connect

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-03-05

    Summary. The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.

  11. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  12. Perturbational formulation of principal component analysis in molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Koyama, Yohei M.; Kobayashi, Tetsuya J.; Tomoda, Shuji; Ueda, Hiroki R.

    2008-10-01

    Conformational fluctuations of a molecule are important to its function since such intrinsic fluctuations enable the molecule to respond to the external environmental perturbations. For extracting large conformational fluctuations, which predict the primary conformational change by the perturbation, principal component analysis (PCA) has been used in molecular dynamics simulations. However, several versions of PCA, such as Cartesian coordinate PCA and dihedral angle PCA (dPCA), are limited to use with molecules with a single dominant state or proteins where the dihedral angle represents an important internal coordinate. Other PCAs with general applicability, such as the PCA using pairwise atomic distances, do not represent the physical meaning clearly. Therefore, a formulation that provides general applicability and clearly represents the physical meaning is yet to be developed. For developing such a formulation, we consider the conformational distribution change by the perturbation with arbitrary linearly independent perturbation functions. Within the second order approximation of the Kullback-Leibler divergence by the perturbation, the PCA can be naturally interpreted as a method for (1) decomposing a given perturbation into perturbations that independently contribute to the conformational distribution change or (2) successively finding the perturbation that induces the largest conformational distribution change. In this perturbational formulation of PCA, (i) the eigenvalue measures the Kullback-Leibler divergence from the unperturbed to perturbed distributions, (ii) the eigenvector identifies the combination of the perturbation functions, and (iii) the principal component determines the probability change induced by the perturbation. Based on this formulation, we propose a PCA using potential energy terms, and we designate it as potential energy PCA (PEPCA). The PEPCA provides both general applicability and clear physical meaning. For demonstrating its power, we
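
    A plausible reconstruction of the second-order step the abstract alludes to (notation ours, under the assumption that the distribution is perturbed by exponential-family tilting with the perturbation functions):

```latex
% Perturb the conformational distribution p(x) by linearly independent
% perturbation functions f_i(x) with strengths \epsilon_i:
%   p_\epsilon(x) \propto p(x)\exp\!\big(\textstyle\sum_i \epsilon_i f_i(x)\big).
% Expanding the Kullback--Leibler divergence to second order in \epsilon,
\[
  D_{\mathrm{KL}}\!\left(p_\epsilon \,\|\, p\right)
  \;\approx\; \tfrac{1}{2}\,\boldsymbol{\epsilon}^{\mathsf{T}} C\,\boldsymbol{\epsilon},
  \qquad
  C_{ij} \;=\; \mathrm{Cov}_{p}\!\left[f_i(x),\, f_j(x)\right].
\]
% Diagonalizing the covariance matrix C of the perturbation functions
% (i.e., PCA on them) then successively finds the perturbations inducing
% the largest distribution change, with eigenvalues measuring that change.
```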

  13. Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools

    SciTech Connect

    Barrand, Guy

    2002-08-20

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualization), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organization, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimizing re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  14. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots

    NASA Astrophysics Data System (ADS)

    Pereira, Jorge Costa; Azevedo, Julio Cesar R.; Knapik, Heloise G.; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).
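
    The paper's POA algorithm and decision criteria are not reproduced here; the sketch below shows only a common baseline answer to the same question, estimating the number of active spectral components, from the PCA explained-variance profile of synthetic three-component mixtures.

```python
# Baseline sketch: estimate the number of active spectral components from
# the cumulative explained variance of a mixtures-by-wavelengths matrix.
import numpy as np

rng = np.random.default_rng(8)
n_mix, n_wl = 60, 150
true_sources = rng.random((3, n_wl))                 # three active spectra
A = rng.random((n_mix, 3))                           # mixing contributions
data = A @ true_sources + 0.01 * rng.standard_normal((n_mix, n_wl))

Xc = data - data.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)
# Count components needed to reach 99% of the variance (an ad hoc cutoff).
n_comp = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
print("estimated number of active spectral components:", n_comp)
```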

  15. The National Cancer Institute Food Component Research Data Base

    PubMed Central

    Herold, Pauline M.; Brooks, Emily M.; Roque, Julio; Marciniak, Thomas A.; Butrum, Ritva R.; Meagher, Kevin

    1988-01-01

    The Food Component Research Data Base (FCRDB) is a software package for IBM PC compatible computers that allows a nutritional researcher to perform complex retrievals on a food component data base (currently the USDA Handbook 8 data) using a structured food description language, the FDA/NCI Factored Food Vocabulary (FFV). The FCRDB software is written in the “C” language and uses a windowing interface for ease of use and bitmaps to achieve excellent response times for complex retrievals on a personal computer.

  16. Principal Component Analysis for Enhancement of Infrared Spectra Monitoring

    NASA Astrophysics Data System (ADS)

    Haney, Ricky Lance

    The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality that, in some cases, may have involved toxic oil components in the bleed air that flows from outside the aircraft and then through the engines into the aircraft cabin. Significant short- and long-term medical issues for aircraft crew have been attributed to exposure. The need for air quality monitoring is especially evident in the fact that currently within an aircraft there are no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA) in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC) to simplify complex, multi-component infrared (IR) spectral data sets into a reduced data set used for determination of the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectral region. Application of this analytical numerical technique to IR spectrum analysis is important in improving the performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results, with the corresponding PCR and PCC used to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two and three component systems that could potentially be present as air
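
    The PCA-plus-PCR pipeline for estimating component concentrations from overlapping spectra can be sketched as follows; the band shapes, noise level, and component count are synthetic surrogates rather than the study's experimental systems.

```python
# Sketch of principal component regression (PCR): compress overlapping IR
# spectra with PCA, then regress concentrations on the scores.
import numpy as np

rng = np.random.default_rng(9)
wn = np.linspace(900, 1100, 300)                     # wavenumber axis (1/cm)

def band(center, width):
    """Synthetic Gaussian absorption band."""
    return np.exp(-(wn - center)**2 / (2 * width**2))

pure = np.vstack([band(980, 15), band(1010, 15)])    # two overlapping bands
conc_train = rng.random((40, 2))                     # known concentrations
X_train = conc_train @ pure + 0.01 * rng.standard_normal((40, wn.size))

# PCA on the training spectra, keep k components.
mu = X_train.mean(axis=0)
U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
k = 2
T = (X_train - mu) @ Vt[:k].T                        # PCA scores
cmu = conc_train.mean(axis=0)
B, *_ = np.linalg.lstsq(T, conc_train - cmu, rcond=None)

# Predict concentrations of an unseen mixture.
c_true = np.array([0.3, 0.7])
x_new = c_true @ pure + 0.01 * rng.standard_normal(wn.size)
c_pred = ((x_new - mu) @ Vt[:k].T) @ B + cmu
print("true:", c_true, "predicted:", np.round(c_pred, 3))
```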

  17. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  18. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  19. Reliability and Creep/Fatigue Analysis of a CMC Component

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.

    2007-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.

  20. Application of independent component analysis to Fermilab Booster

    SciTech Connect

    Huang, X.B.; Lee, S.Y.; Prebys, E.; Tomlin, R.; /Indiana U. /Fermilab

    2005-01-01

    Autocorrelation is applied to analyze sets of finite-sampling data such as the turn-by-turn beam position monitor (BPM) data in an accelerator. This method of data analysis, called independent component analysis (ICA), is shown to be a powerful beam diagnosis tool because it can decompose sampled signals into their underlying source signals. The authors find that ICA has an advantage over the principal component analysis (PCA) used in model-independent analysis (MIA) in isolating independent modes. The tolerance of the ICA method to noise in the BPM system is systematically studied. The ICA is applied to analyze the complicated beam motion in a rapid-cycling booster synchrotron at Fermilab. Difficulties and limitations of the ICA method are also discussed.

  1. Spatially Weighted Principal Component Analysis for Imaging Classification

    PubMed Central

    Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu

    2014-01-01

    The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629

  2. Partial Component Analysis of a Comprehensive Smoking Program.

    ERIC Educational Resources Information Center

    Horan, John J.; Hackett, Gail

    The effects of a comprehensive program for the treatment of cigarette addiction were investigated. Subjects were 18 university students and 12 community members. Abstinence levels of 40 percent, verified by expired air carbon monoxide tests, were achieved in a six to nine month follow-up period. A partial component analysis revealed that the…

  3. Analysis of first-pass myocardial perfusion MRI using independent component analysis

    NASA Astrophysics Data System (ADS)

    Milles, Julien; van der Geest, Rob J.; Jerosch-Herold, Michael; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.

    2006-03-01

    Myocardial perfusion MRI has emerged as a suitable imaging technique for the detection of ischemic regions of the heart. However, manual post-processing is labor intensive, seriously hampering its daily clinical use. We propose a novel, data driven analysis method based on Independent Component Analysis (ICA). By performing ICA on the complete perfusion sequence, physiologically meaningful feature images, representing events occurring during the perfusion sequence, can be factored out. Results obtained using our method are compared with results obtained using manual contouring by a medical expert. The estimated weight functions are correlated against the perfusion time-intensity curves from manual contours, yielding promising results.

  4. Time-frequency component analysis of somatosensory evoked potentials in rats

    PubMed Central

    Zhang, Zhi-Guo; Yang, Jun-Lin; Chan, Shing-Chow; Luk, Keith Dip-Kei; Hu, Yong

    2009-01-01

    Background Somatosensory evoked potential (SEP) signal usually contains a set of detailed temporal components measured and identified in a time domain, giving meaningful information on physiological mechanisms of the nervous system. The purpose of this study is to measure and identify detailed time-frequency components in normal SEP using time-frequency analysis (TFA) methods and to obtain their distribution pattern in the time-frequency domain. Methods This paper proposes to apply a high-resolution time-frequency analysis algorithm, the matching pursuit (MP), to extract detailed time-frequency components of SEP signals. The MP algorithm decomposes a SEP signal into a number of elementary time-frequency components and provides a time-frequency parameter description of the components. A clustering by estimation of the probability density function in parameter space is followed to identify stable SEP time-frequency components. Results Experimental results on cortical SEP signals of 28 mature rats show that a series of stable SEP time-frequency components can be identified using the MP decomposition algorithm. Based on the statistical properties of the component parameters, an approximated distribution of these components in time-frequency domain is suggested to describe the complex SEP response. Conclusion This study shows that there is a set of stable and minute time-frequency components in SEP signals, which are revealed by the MP decomposition and clustering. These stable SEP components have specific localizations in the time-frequency domain. PMID:19203394
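
    The matching-pursuit decomposition itself is compact enough to sketch: greedily project the signal onto a Gabor dictionary and peel off one time-frequency atom at a time. The dictionary grid and the toy "evoked potential" below are illustrative, not the study's rat SEP data.

```python
# Compact matching-pursuit (MP) sketch over a small Gabor dictionary.
import numpy as np

fs, n = 1000, 512                       # sampling rate (Hz), samples
t = np.arange(n) / fs

def gabor(t0, f0, sigma):
    """Unit-norm Gabor atom centered at time t0, frequency f0, width sigma."""
    g = np.exp(-((t - t0)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * (t - t0))
    return g / np.linalg.norm(g)

# Dictionary: atoms on a coarse grid of centers, frequencies, and widths.
atoms = [(t0, f0, s) for t0 in np.arange(0.05, 0.5, 0.025)
                     for f0 in (20, 50, 100, 200)
                     for s in (0.005, 0.01, 0.02)]
D = np.array([gabor(*a) for a in atoms])          # n_atoms x n matrix

# Toy evoked potential: two time-frequency components plus noise.
signal = 3 * gabor(0.10, 50, 0.01) + 1.5 * gabor(0.20, 100, 0.005)
signal = signal + 0.2 * np.random.default_rng(10).standard_normal(n)

residual, picked = signal.copy(), []
for _ in range(2):                                 # extract two components
    proj = D @ residual                            # correlations with all atoms
    best = int(np.argmax(np.abs(proj)))
    coef = proj[best]
    residual = residual - coef * D[best]           # peel off the best atom
    picked.append((atoms[best], round(float(coef), 2)))
for (t0, f0, s), c in picked:
    print(f"atom at t0={t0:.3f}s, f0={f0}Hz, sigma={s}s, coefficient {c}")
```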

  5. Model-based tomographic reconstruction of objects containing known components.

    PubMed

    Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H

    2012-10-01

    The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.

  7. Principal Components Analysis of Triaxial Vibration Data From Helicopter Transmissions

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Huff, Edward M.

    2001-01-01

    Research on the nature of the vibration data collected from helicopter transmissions during flight experiments has led to several crucial observations believed to be responsible for the high rates of false alarms and missed detections in aircraft vibration monitoring systems. This work focuses on one such finding, namely, the need to consider additional sources of information about system vibrations. In this light, helicopter transmission vibration data, collected using triaxial accelerometers, were explored in three different directions, analyzed for content, and then combined using Principal Components Analysis (PCA) to analyze changes in directionality. In this paper, the PCA transformation is applied to 176 test conditions/data sets collected from an OH58C helicopter to derive the overall experiment-wide covariance matrix and its principal eigenvectors. The experiment-wide eigenvectors are then projected onto the individual test conditions to evaluate changes and similarities in their directionality based on the various experimental factors. The paper will present the foundations of the proposed approach, addressing the question of whether experiment-wide eigenvectors accurately model the vibration modes in individual test conditions. The results will further determine the value of using directionality and triaxial accelerometers for vibration monitoring and anomaly detection.

  8. Major component analysis of dynamic networks of physiologic organ interactions

    NASA Astrophysics Data System (ADS)

    Liu, Kang K. L.; Bartsch, Ronny P.; Ma, Qianli D. Y.; Ivanov, Plamen Ch

    2015-09-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function.

  9. Minimax mutual information approach for independent component analysis.

    PubMed

    Erdogmus, Deniz; Hild, Kenneth E; Rao, Yadunandana N; Príncipe, José C

    2004-06-01

    Minimum output mutual information is regarded as a natural criterion for independent component analysis (ICA) and is used as the performance measure in many ICA algorithms. Two common approaches in information-theoretic ICA algorithms are minimum mutual information and maximum output entropy approaches. In the former approach, we substitute some form of probability density function (pdf) estimate into the mutual information expression, and in the latter we incorporate the source pdf assumption in the algorithm through the use of nonlinearities matched to the corresponding cumulative density functions (cdf). Alternative solutions to ICA use higher-order cumulant-based optimization criteria, which are related to either one of these approaches through truncated series approximations for densities. In this article, we propose a new ICA algorithm motivated by the maximum entropy principle (for estimating signal distributions). The optimality criterion is the minimum output mutual information, where the estimated pdfs are from the exponential family and are approximate solutions to a constrained entropy maximization problem. This approach yields an upper bound for the actual mutual information of the output signals - hence, the name minimax mutual information ICA algorithm. In addition, we demonstrate that for a specific selection of the constraint functions in the maximum entropy density estimation procedure, the algorithm relates strongly to ICA methods using higher-order cumulants. PMID:15130248

  10. Analysis of model Titan atmospheric components using ion mobility spectrometry

    NASA Technical Reports Server (NTRS)

    Kojiro, D. R.; Cohen, M. J.; Wernlund, R. F.; Stimac, R. M.; Humphry, D. E.; Takeuchi, N.

    1991-01-01

    The Gas Chromatograph-Ion Mobility Spectrometer (GC-IMS) was proposed as an analytical technique for the analysis of Titan's atmosphere during the Cassini Mission. The IMS is an atmospheric pressure, chemical detector that produces an identifying spectrum of each chemical species measured. When the IMS is combined with a GC as a GC-IMS, the GC is used to separate the sample into its individual components, or perhaps small groups of components. The IMS is then used to detect, quantify, and identify each sample component. Conventional IMS detection and identification of sample components depends upon a source of energetic radiation, such as beta radiation, which ionizes the atmospheric pressure host gas. This primary ionization initiates a sequence of ion-molecule reactions leading to the formation of sufficiently energetic positive or negative ions, which in turn ionize most constituents in the sample. In conventional IMS, this reaction sequence is dominated by the water cluster ion. However, many of the light hydrocarbons expected in Titan's atmosphere cannot be analyzed by IMS using this mechanism at the concentrations expected. Research at NASA Ames and PCP Inc. has demonstrated IMS analysis of expected Titan atmospheric components, including saturated aliphatic hydrocarbons, using two alternative sample ionization mechanisms. The sensitivity of the IMS to hydrocarbons such as propane and butane was increased by several orders of magnitude. Both ultra-dry (waterless) IMS sample ionization and metastable ionization were successfully used to analyze a model Titan atmospheric gas mixture.

  11. Factor analysis for isolation of the Raman spectra of aqueous sulfuric acid components

    SciTech Connect

    Malinowski, E.R.; Cox, R.A.; Haldna, U.L.

    1984-04-01

    The Raman spectra of 16 sulfuric acid/water mixtures over the entire mole fraction range were studied by various factor analysis techniques. Abstract factor analysis showed that three factors account for 98.69% of the variation in the data with a real error of 13%. Key-set factor analysis was used to identify three spectral wavenumbers unique to each component. Spectral-isolation factor analysis, based on the key wavenumbers, revealed the spectra of each unknown component. Target factor analysis, based on the isolated spectra, yielded the relative amounts of the three spectral components. The concentration profiles obtained from the factor loadings, as well as the isolated spectra, were used to identify the chemical species.

  12. Performance-based seismic design of nonstructural building components: The next frontier of earthquake engineering

    NASA Astrophysics Data System (ADS)

    Filiatrault, Andre; Sullivan, Timothy

    2014-08-01

    With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major

  13. Guide for Hydrogen Hazards Analysis on Components and Systems

    NASA Technical Reports Server (NTRS)

    Beeson, Harold; Woods, Stephen

    2003-01-01

    The physical and combustion properties of hydrogen give rise to hazards that must be considered when designing and operating a hydrogen system. One of the major concerns in the use of hydrogen is that of fire or detonation because of hydrogen's wide flammability range, low ignition energy, and flame speed. Other concerns include the contact and interaction of hydrogen with materials, such as the hydrogen embrittlement of materials and the formation of hydrogen hydrides. The low temperatures of liquid and slush hydrogen bring other concerns related to material compatibility and pressure control; this is especially important when dissimilar, adjoining materials are involved. The potential hazards arising from these properties and design features necessitate a proper hydrogen hazards analysis before introducing a material, component, or system into hydrogen service. The objective of this guide is to describe the NASA Johnson Space Center White Sands Test Facility hydrogen hazards analysis method that should be performed before hydrogen is used in components and/or systems. The method is consistent with standard practices for analyzing hazards. It is recommended that this analysis be made before implementing a hydrogen component qualification procedure. A hydrogen hazards analysis is a useful tool for hydrogen-system designers, system and safety engineers, and facility managers. A hydrogen hazards analysis can identify problem areas before hydrogen is introduced into a system, preventing damage to hardware, delay or loss of mission or objective, and possible injury or loss of life.

  14. Analysis of exposure due to work on activated components

    SciTech Connect

    Cossairt, J.D.

    1987-09-01

    In this brief note, the author summarizes an analysis of the exposure incurred in various maintenance jobs involving activated accelerator and beam-line components at Fermilab. A tabulation was made of parameters associated with each job, including rather terse descriptions of the various tasks. Various plots of the quantities in the table are presented. All exposure rates are in mR/hr and all accumulated exposures are in mR. The exposure rates were generally measured at the Fermilab-standard one-foot distance from the activated component. Accumulated exposures are taken from the self-reading pocket dosimeter records maintained by the radiation control technicians.

  15. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

    We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In that work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
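
    The PCA demodulation scheme being analyzed can be sketched as follows (a simplification of the general idea, with synthetic fringes and invented parameters): after subtracting each pixel's temporal mean, the two leading principal components of the phase-shifted frames form a quadrature pair whose arctangent gives the wrapped phase.

```python
# Sketch of PCA phase demodulation for phase-shifted interferograms.
# The test phase, shifts, and sizes are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(11)
ny, nx, n_frames = 128, 128, 8
yy, xx = np.mgrid[0:ny, 0:nx]
phase = 2 * np.pi * (xx**2 + yy**2) / (nx * ny)       # synthetic test phase
shifts = rng.uniform(0, 2 * np.pi, n_frames)          # unknown, non-uniform

frames = np.stack([1 + 0.8 * np.cos(phase + d) for d in shifts], axis=-1)
X = frames.reshape(-1, n_frames)
X = X - X.mean(axis=1, keepdims=True)                 # remove background term

# The first two left singular vectors span the (cos, sin) quadrature pair.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = (U[:, 0] * s[0]).reshape(ny, nx)
pc2 = (U[:, 1] * s[1]).reshape(ny, nx)
demod = np.arctan2(pc2, pc1)                          # wrapped phase estimate

def circ_rms(d):
    """Circular RMS of phase differences after removing a constant offset."""
    z = np.exp(1j * d)
    z = z * np.conj(np.mean(z) / np.abs(np.mean(z)))
    return float(np.sqrt(np.mean(np.angle(z) ** 2)))

# The estimate is defined up to a global sign and offset; check both signs.
err = min(circ_rms(demod - phase), circ_rms(-demod - phase))
print(f"wrapped-phase RMS error (up to sign/offset): {err:.3f} rad")
```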

  16. Principal components analysis of Mars in the near-infrared

    NASA Astrophysics Data System (ADS)

    Klassen, David R.

    2009-11-01

    Principal components analysis and target transformation are applied to near-infrared image cubes of Mars in a study to disentangle the spectra into a small number of spectral endmembers and characterize the spectral information. The image cubes are ground-based telescopic data from the NASA Infrared Telescope Facility during the 1995 and 1999 near-aphelion oppositions when ice clouds were plentiful [ Clancy, R.T., Grossman, A.W., Wolff, M.J., James, P.B., Rudy, D.J., Billawala, Y.N., Sandor, B.J., Lee, S.W., Muhleman, D.O., 1996. Icarus 122, 36-62; Wolff, M.J., Clancy, R.T., Whitney, B.A., Christensen, P.R., Pearl, J.C., 1999b. In: The Fifth International Conference on Mars, July 19-24, 1999, Pasadena, CA, pp. 6173], and the 2003 near-perihelion opposition when ice clouds are generally limited to topographically high regions (volcano cap clouds) but airborne dust is more common [ Martin, L.J., Zurek, R.W., 1993. J. Geophys. Res. 98 (E2), 3221-3246]. The heart of the technique is to transform the data into a vector space along the dimensions of greatest spectral variance and then choose endmembers based on these new "trait" dimensions. This is done through a target transformation technique, comparing linear combinations of the principal components to a mineral spectral library. In general Mars can be modeled, on the whole, with only three spectral endmembers which account for almost 99% of the data variance. This is similar to results in the thermal infrared with Mars Global Surveyor Thermal Emission Spectrometer data [Bandfield, J.L., Hamilton, V.E., Christensen, P.R., 2000. Science 287, 1626-1630]. The globally recovered surface endmembers can be used as inputs to radiative transfer modeling in order to measure ice abundance in martian clouds [Klassen, D.R., Bell III, J.F., 2002. Bull. Am. Astron. Soc. 34, 865] and a preliminary test of this technique is also presented.

  17. Component-based software for high-performance scientific computing

    NASA Astrophysics Data System (ADS)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  18. Learning multiview face subspaces and facial pose estimation using independent component analysis.

    PubMed

    Li, Stan Z; Lu, XiaoGuang; Hou, Xinwen; Peng, Xianhua; Cheng, Qiansheng

    2005-06-01

    An independent component analysis (ICA) based approach is presented for learning view-specific subspace representations of the face object from multiview face examples. ICA and its variants, namely independent subspace analysis (ISA) and topographic independent component analysis (TICA), take into account the higher-order statistics needed for object view characterization. In contrast, principal component analysis (PCA), which de-correlates the second-order moments, can hardly reveal good features for characterizing different views when the training data comprise a mixture of multiview examples and the learning is done in an unsupervised way with view-unlabeled data. We demonstrate that ICA, TICA, and ISA are able to learn view-specific basis components from the mixture data without supervision. We closely investigate the results learned by ISA in an unsupervised way, reveal some surprising findings, and thereby explain the underlying reasons for the emergent formation of view subspaces. Extensive experimental results are presented.

  19. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos

    2013-01-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet performance requirements.

  20. Principal Component Analysis of Terrestrial and Venusian Topography

    NASA Astrophysics Data System (ADS)

    Stoddard, P. R.; Jurdy, D. M.

    2015-12-01

    We use Principal Component Analysis (PCA) as an objective tool in analyzing, comparing, and contrasting topographic profiles of different/similar features from different locations and planets. To do so, we take average profiles of a set of features and form a cross-correlation matrix, which is then diagonalized to determine its principal components. These components, not merely numbers, represent actual profile shapes that give a quantitative basis for comparing different sets of features. For example, PCA for terrestrial hotspots shows the main component as a generic dome shape. Secondary components show a more sinusoidal shape, related to the lithospheric loading response, and thus give information about the nature of the lithosphere setting of the various hotspots. We examine a range of terrestrial spreading centers: fast, slow, ultra-slow, incipient, and extinct, and compare these to several chasmata on Venus (including Devana, Ganis, Juno, Parga, and Kuanja). For upwelling regions, we consider the oceanic Hawaii, Reunion, and Iceland hotspots and Yellowstone, a prototypical continental hotspot. Venus has approximately one dozen broad topographic and geoid highs called regiones. Our analysis includes Atla, Beta, and W. Eistla regiones. Atla and Beta are widely thought to be the most likely to be currently or recently active. Analysis of terrestrial rifts suggests increasing uniformity of shape among rifts with increasing spreading rates. Venus' correlations of uniformity rank considerably lower than the terrestrial ones. Extrapolating the correlation/spreading-rate relationship suggests that Venus' chasmata, if analogous to terrestrial spreading centers, most resemble the ultra-slow spreading level (less than 12 mm/yr) of the Arctic Gakkel ridge. PCA will provide an objective measurement of this correlation.

  1. Estimation of Baroreflex Function Using Independent Component Analysis of Photoplethysmography

    NASA Astrophysics Data System (ADS)

    Abe, Makoto; Yoshizawa, Makoto; Sugita, Norihiro; Tanaka, Akira; Homma, Noriyasu; Yambe, Tomoyuki; Nitta, Shin-Ichi

    The maximum cross-correlation coefficient ρmax between blood pressure variability and heart rate variability, whose frequency components are limited to the Mayer wave-related band, is a useful index for evaluating the state of the autonomic nervous function related to baroreflex. However, calculating ρmax requires continuous blood pressure measurement with an expensive and bulky device. The present study proposes an easier method for obtaining ρmax from measurement of finger photoplethysmography (PPG) alone. In the proposed method, independent components are extracted from feature variables specifying the PPG signal by using independent component analysis (ICA), and then the most appropriate component is chosen so that the ρmax calculated from it approximates its true value. The results from an experiment with a postural change performed in 18 healthy subjects suggest that the proposed method can estimate ρmax by using ICA to extract blood pressure information from the PPG signal.
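
    For reference, the index itself (as opposed to the ICA extraction step) can be sketched as follows: band-limit the two variability signals to the Mayer wave-related band and take the maximum of their normalized cross-correlation. The band edges here are assumed values around the ~0.1 Hz Mayer wave, not the authors' exact settings.

    ```python
    # Sketch: maximum cross-correlation coefficient in the Mayer-wave band.
    import numpy as np
    from scipy.signal import butter, filtfilt, correlate

    def rho_max(bp, hr, fs, band=(0.04, 0.15)):
        b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        x = filtfilt(b, a, bp - np.mean(bp))       # band-limited BP variability
        y = filtfilt(b, a, hr - np.mean(hr))       # band-limited HR variability
        c = correlate(x, y, mode="full")
        c /= np.sqrt(np.sum(x**2) * np.sum(y**2))  # normalize to [-1, 1]
        return np.max(np.abs(c))
    ```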

  2. Principal Component Analysis of Arctic Solar Irradiance Spectra

    NASA Technical Reports Server (NTRS)

    Rabbette, Maura; Pilewskie, Peter; Gore, Warren J. (Technical Monitor)

    2000-01-01

    During the FIRE (First ISCCP Regional Experiment) Arctic Cloud Experiment and coincident SHEBA (Surface Heat Budget of the Arctic Ocean) campaign, detailed moderate resolution solar spectral measurements were made to study the radiative energy budget of the coupled Arctic Ocean - Atmosphere system. The NASA Ames Solar Spectral Flux Radiometers (SSFRs) were deployed on the NASA ER-2 and at the SHEBA ice camp. Using the SSFRs, we acquired continuous solar spectral irradiance (380-2200 nm) throughout the atmospheric column. Principal Component Analysis (PCA) was used to characterize the several tens of thousands of retrieved SSFR spectra and to determine the number of independent pieces of information that exist in the visible to near-infrared solar irradiance spectra. It was found in both the upwelling and downwelling cases that almost 100% of the spectral information (irradiance retrieved from 1820 wavelength channels) was contained in the first six extracted principal components. The majority of the variability in the Arctic downwelling solar irradiance spectra was explained by a few fundamental components, including infrared absorption, scattering, water vapor, and ozone. PCA of the SSFR upwelling Arctic irradiance spectra successfully separated surface ice and snow reflection from overlying cloud into distinct components.
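
    The dimensionality claim, that six components capture nearly all of the spectral variance, corresponds to a standard explained-variance check, sketched below with a hypothetical data file.

    ```python
    # Sketch: cumulative variance explained by the leading components.
    import numpy as np
    from sklearn.decomposition import PCA

    spectra = np.load("ssfr_spectra.npy")    # (n_spectra, 1820), hypothetical file
    pca = PCA(n_components=10).fit(spectra)
    cum = np.cumsum(pca.explained_variance_ratio_)
    print(f"variance captured by first 6 PCs: {cum[5]:.4%}")
    ```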

  3. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures, including an expert system, a probabilistic finite element code, a probabilistic boundary element code, and a fast probability integrator. The NESSUS software system is described. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.

  4. Independent Component Analyses of Ground-based Exoplanetary Transits

    NASA Astrophysics Data System (ADS)

    Silva Martins-Filho, Walter; Griffith, Caitlin Ann; Pearson, Kyle; Waldmann, Ingo; Biddle, Lauren; Zellem, Robert Thomas; Alvarez-Candal, Alvaro

    2016-10-01

    Most observations of exoplanetary atmospheres are conducted when a "Hot Jupiter" exoplanet transits in front of its host star. These Jovian-sized planets have small orbital periods, on the order of days, and therefore a short transit time, making them more amenable to observations. Measurements of Hot Jupiter transits must achieve a 10⁻⁴ level of accuracy in the flux to determine the spectral modulations of the exoplanetary atmosphere. In order to accomplish this level of precision, we need to separate systematic errors, and, for ground-based measurements, the effects of Earth's atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. Currently, the effects of the terrestrial atmosphere and some of the time-dependent systematic errors are treated by dividing the host star by a reference star at each wavelength and time step of the transit. More recently, Independent Component Analyses (ICA) have been used to remove systematic effects from the raw data of space-based observations (Waldmann 2014, 2012; Morello et al., 2015, 2016). ICA is a statistical method born from blind-source separation studies, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). One strength of this method is that it requires no additional prior knowledge of the system. Here, we present a study of the application of ICA to ground-based transit observations of extrasolar planets, which are affected by Earth's atmosphere. We analyze photometric data of two extrasolar planets, WASP-1b and GJ3470b, recorded by the 61" Kuiper Telescope at Steward Observatory using the Harris B and U filters. The presentation will compare the light curve depths and their dispersions as derived from the ICA analysis to those derived by analyses that ratio the host star to nearby reference stars. References: Waldmann, I. P. 2012 ApJ, 747, 12; Waldmann, I. P. 2014 ApJ, 780, 23; Morello, G. 2015 ApJ, 806

  5. Independent component analysis for underwater lidar clutter rejection

    NASA Astrophysics Data System (ADS)

    Illig, David W.; Jemison, William D.; Mullen, Linda J.

    2016-05-01

    This work demonstrates a new statistical approach towards backscatter "clutter" rejection for continuous-wave underwater lidar systems: independent component analysis. Independent component analysis is a statistical signal processing technique which can separate a return of interest from clutter in a statistical domain. After highlighting the statistical processing concepts, we demonstrate that underwater lidar target and backscatter returns have very different distributions, facilitating their separation in a statistical domain. Example profiles are provided showing the results of this separation, and ranging experiment results are presented. In the ranging experiment, performance is compared to a more conventional frequency-domain filtering approach. Target tracking is maintained to 14.5 attenuation lengths in the laboratory test tank environment, a 2.5 attenuation length improvement over the baseline.

  6. Incorporating finite element analysis into component life and reliability

    NASA Technical Reports Server (NTRS)

    August, Richard; Zaretsky, Erwin V.

    1991-01-01

    A method for calculating a component's design survivability by incorporating finite element analysis and probabilistic material properties was developed. The method evaluates design parameters through direct comparisons of component survivability expressed in terms of Weibull parameters. The analysis was applied to a rotating disk with mounting bolt holes. The highest probability of failure occurred at, or near, the maximum shear stress region of the bolt holes. Distribution of failure as a function of Weibull slope affects the probability of survival. Where Weibull parameters are unknown for a rotating disk, it may be permissible to assume Weibull parameters, as well as the stress-life exponent, in order to determine the disk speed where the probability of survival is highest.
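
    The role the Weibull slope plays in the survival probability can be seen directly from the two-parameter model; the sketch below uses assumed values for the slope beta and characteristic life eta, not parameters from the study.

    ```python
    # Sketch: two-parameter Weibull survival R(t) = exp(-(t/eta)^beta).
    import numpy as np

    def survival(t, beta, eta):
        """Probability that a component survives to life t."""
        return np.exp(-(t / eta) ** beta)

    # Steeper slopes concentrate failures near the characteristic life:
    for beta in (1.0, 2.0, 4.0):
        print(f"beta={beta}: R(0.5) = {survival(0.5, beta, eta=1.0):.3f}")
    ```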

  7. Deuterium incorporation in biomass cell wall components by NMR analysis

    SciTech Connect

    Foston, Marcus B; McGaughey, Joseph; O'Neill, Hugh Michael; Evans, Barbara R; Ragauskas, Arthur J

    2012-01-01

    A commercially available deuterated kale sample was analyzed for deuterium incorporation by ionic liquid solution ²H and ¹H nuclear magnetic resonance (NMR). This protocol was found to effectively measure the percent deuterium incorporation at 33%, comparable to the 31% value determined by combustion. The solution NMR technique also suggested, by a qualitative analysis, that deuterium is preferentially incorporated into the carbohydrate components of the kale sample.

  8. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    PubMed Central

    Jirayucharoensak, Suwicha; Pan-Ngum, Setha; Israsena, Pasin

    2014-01-01

    Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance compared to SVM and naive Bayes classifiers. PMID:25258728

  9. Guidelines for Design and Analysis of Large, Brittle Spacecraft Components

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1993-01-01

    There were two related parts to this work. The first, conducted at The Aerospace Corporation was to develop and define methods for integrating the statistical theory of brittle strength with conventional finite element stress analysis, and to carry out a limited laboratory test program to illustrate the methods. The second part, separately funded at Aerojet Electronic Systems Division, was to create the finite element postprocessing program for integrating the statistical strength analysis with the structural analysis. The second part was monitored by Capt. Jeff McCann of USAF/SMC, as Special Study No.11, which authorized Aerojet to support Aerospace on this work requested by NASA. This second part is documented in Appendix A. The activity at Aerojet was guided by the Aerospace methods developed in the first part of this work. This joint work of Aerospace and Aerojet stemmed from prior related work for the Defense Support Program (DSP) Program Office, to qualify the DSP sensor main mirror and corrector lens for flight as part of a shuttle payload. These large brittle components of the DSP sensor are provided by Aerojet. This document defines rational methods for addressing the structural integrity and safety of large, brittle, payload components, which have low and variable tensile strength and can suddenly break or shatter. The methods are applicable to the evaluation and validation of such components, which, because of size and configuration restrictions, cannot be validated by direct proof test.

  10. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  11. Multivariate concentration determination using principal component regression with residual analysis

    PubMed Central

    Keithley, Richard B.; Heien, Michael L.; Wightman, R. Mark

    2009-01-01

    Data analysis is an essential tenet of analytical chemistry, extending the possible information obtained from the measurement of chemical phenomena. Chemometric methods have grown considerably in recent years, but their wide use is hindered because some still consider them too complicated. The purpose of this review is to describe a multivariate chemometric method, principal component regression, in a simple manner from the point of view of an analytical chemist, to demonstrate the need for proper quality-control (QC) measures in multivariate analysis and to advocate the use of residuals as a proper QC method. PMID:20160977
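
    The workflow the review advocates, principal component regression plus a residual-based quality check, can be sketched in a few lines; the Q-residual statistic and all names below are illustrative, not the authors' implementation.

    ```python
    # Sketch: principal component regression with a residual QC statistic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def fit_pcr(X, y, k):
        pca = PCA(n_components=k).fit(X)
        reg = LinearRegression().fit(pca.transform(X), y)
        return pca, reg

    def predict_with_qc(pca, reg, X_new):
        scores = pca.transform(X_new)
        resid = X_new - pca.inverse_transform(scores)  # part the model cannot explain
        q = np.sum(resid**2, axis=1)                   # Q residual per sample
        return reg.predict(scores), q                  # large q -> distrust prediction
    ```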

  12. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  13. Efficient training of multilayer perceptrons using principal component analysis

    SciTech Connect

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-08-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior.

  14. Spectral Synthesis via Mean Field approach to Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ning; Su, Shan-Shan; Kong, Xu

    2016-03-01

    We apply a new statistical analysis technique, the Mean Field approach to Independent Component Analysis (MF-ICA) in a Bayesian framework, to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), and the galaxy spectrum can be reconstructed from these ICs. Compared to other algorithms which decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different methods are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) a consistency test on parameters derived with galaxies from the Sloan Digital Sky Survey. We find that our MF-ICA method can not only fit the observed galaxy spectra efficiently, but also accurately recover the physical parameters of galaxies. We also apply our spectral analysis method to the DEEP2 spectroscopic data, and find it provides excellent fitting results for low signal-to-noise spectra.

  15. Identification of the isomers using principal component analysis (PCA) method

    NASA Astrophysics Data System (ADS)

    Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur

    2016-03-01

    In this work, we have carried out a detailed statistical analysis of experimental mass spectra from xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as an energy source for the ionisation processes. We performed experiments and collected data which were analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method in distinguishing isomers which cannot be identified using conventional mass analysis obtained through dissociative ionisation processes on these molecules. The PCA results, depending on the laser pulse energy and the background pressure in the spectrometer, are presented in this work.

  16. A comparative study of principal component analysis and independent component analysis in eddy current pulsed thermography data processing.

    PubMed

    Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W L

    2013-10-01

    Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied to a wide range of materials. Lateral heat diffusion decreases the temperature contrast between defective and defect-free areas. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for processing thermography image sequences in recent years. However, direct and detailed independent comparisons of the two algorithms and their implementations are lacking. The aim of this article is to compare the two methods and to determine the optimal technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on the detection of artificial cracks and natural thermal fatigue cracks.

  17. A comparative study of principal component analysis and independent component analysis in eddy current pulsed thermography data processing

    NASA Astrophysics Data System (ADS)

    Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W. L.

    2013-10-01

    Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied to a wide range of materials. Lateral heat diffusion decreases the temperature contrast between defective and defect-free areas. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for processing thermography image sequences in recent years. However, direct and detailed independent comparisons of the two algorithms and their implementations are lacking. The aim of this article is to compare the two methods and to determine the optimal technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on the detection of artificial cracks and natural thermal fatigue cracks.

  18. Representation for dialect recognition using topographic independent component analysis

    NASA Astrophysics Data System (ADS)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the tone features of a dialect are subject to changes in pitch frequency as well as in tone length. It is beneficial for recognition if a representation can be derived that accounts for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing dialect features that vary in pitch frequency and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning, producing an emergent result: a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  19. Applications Of Nonlinear Principal Components Analysis To Behavioral Data.

    PubMed

    Hicks, M M

    1981-07-01

    A quadratic function was derived from variables believed to be nonlinearly related. The method was suggested by Gnanadesikan (1977) and based on an early paper of Karl Pearson (1901) (which gave rise to principal components), in which Pearson demonstrated that a plane of best fit to a system of points could be elicited from the elements of the eigenvector associated with the smallest eigenvalue of the covariance matrix. PMID:26815595

  20. Prediction of p38 map kinase inhibitory activity of 3, 4-dihydropyrido [3, 2-d] pyrimidone derivatives using an expert system based on principal component analysis and least square support vector machine.

    PubMed

    Shahlaei, M; Saghaie, L

    2014-01-01

    A quantitative structure-activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structures was established by means of principal component analysis (PCA) and least square support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied for the prediction of the biological activities of pyrimidone derivatives, which were not in the modeling procedure. The resulted model showed high prediction ability with root mean square error of prediction of 0.460 for LS-SVM. The study provided a novel and effective approach for predicting biological activities of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors and disclosed that LS-SVM can be used as a powerful chemometrics tool for QSAR studies.

  1. Prediction of p38 map kinase inhibitory activity of 3, 4-dihydropyrido [3, 2-d] pyrimidone derivatives using an expert system based on principal component analysis and least square support vector machine

    PubMed Central

    Shahlaei, M.; Saghaie, L.

    2014-01-01

    A quantitative structure–activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structures was established by means of principal component analysis (PCA) and least square support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied for the prediction of the biological activities of pyrimidone derivatives, which were not in the modeling procedure. The resulted model showed high prediction ability with root mean square error of prediction of 0.460 for LS-SVM. The study provided a novel and effective approach for predicting biological activities of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors and disclosed that LS-SVM can be used as a powerful chemometrics tool for QSAR studies. PMID:26339262

  2. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Because of the special imaging principle of interference hyperspectral image data, every frame contains many vertical interference stripes. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements of the background also exist between frames. These characteristics destroy the regular structure of the original interference hyperspectral image data, so the direct application of compressive sensing theory and traditional compression algorithms cannot achieve the desired effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, in this paper the morphological component analysis (MCA) is adopted to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral imagery leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed according to the characteristics of the interference hyperspectral image data: the iterative convergence condition is improved so that the iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. In addition, based on the idea that an orthogonal basis can sparsely represent its corresponding signal but not the other signals, an adaptive threshold update scheme is also proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that

  3. Removing Milky Way from airglow images using principal component analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhenhua; Liu, Alan; Sivjee, Gulamabas G.

    2014-04-01

    Airglow imaging is an effective way to obtain atmospheric gravity wave information in the airglow layers in the upper mesosphere and the lower thermosphere. Airglow images are often contaminated by the Milky Way emission. To extract gravity wave parameters correctly, the Milky Way must be removed. The paper demonstrates that principal component analysis (PCA) can effectively represent the dominant variation patterns of the intensity of airglow images that are associated with the slow moving Milky Way features. Subtracting this PCA reconstructed field reveals gravity waves that are otherwise overwhelmed by the strong spurious waves associated with the Milky Way. Numerical experiments show that nonstationary gravity waves with typical wave amplitudes and persistences are not affected by the PCA removal because the variances contributed by each wave event are much smaller than the ones in the principal components.
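
    The subtraction scheme amounts to reconstructing the slowly varying part of the image sequence from its leading principal components and removing it, as in this rough sketch (the number of components kept is an assumption, not the paper's value).

    ```python
    # Sketch: subtract a PCA reconstruction of the slow background (Milky Way).
    import numpy as np

    def subtract_slow_background(frames, n_pc=3):
        """frames: (n_frames, n_pix) flattened airglow images."""
        mu = frames.mean(axis=0)
        U, s, Vt = np.linalg.svd(frames - mu, full_matrices=False)
        slow = (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc]   # slowly varying reconstruction
        return frames - mu - slow                     # residual: gravity-wave signal
    ```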

  4. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  5. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are not many reports on the quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique which decomposes each chromophore into multi-resolution images, which can be used for identifying different sizes of clusters or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. As a result, the method showed a high capability to analyze the unevenness of each skin chromophore: 1) Vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne. 2) By analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of middle spatial frequency. 3) An image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.

  6. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    SciTech Connect

    Whalen, Michael R.; /Stevens Tech. /SLAC

    2011-06-22

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source, LCLS, timing differences between the optical pulse and X-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements from signals corresponding to the two laser pulses; these measurements are done by using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting phase detection systems based on mixers, such as signal input power, noise levels, temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to exist on a scale between 1 and 27 fs/°C for all of the components tested. Results show that any components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved to lower than currently possible.

  7. Analysis of Residual Dependencies of Independent Components Extracted from fMRI Data.

    PubMed

    Vanello, N; Ricciardi, E; Landini, L

    2016-01-01

    Independent component analysis (ICA) of functional magnetic resonance imaging (fMRI) data can be employed as an exploratory method. The ICA model's lack of strong a priori assumptions about the signal or the noise makes the results difficult to interpret. Moreover, the statistical independence of the components is only approximate. Residual dependencies among the components can reveal informative structure in the data. A major problem is related to model order selection, that is, the number of components to be extracted; specifically, overestimation may lead to component splitting. In this work, a method based on hierarchical clustering of the independent components extracted from fMRI datasets is investigated. The clustering algorithm uses a metric based on the mutual information between the ICs. To estimate the similarity measure, a histogram-based technique and one based on kernel density estimation are tested on simulated datasets. Simulation results indicate that the method could be used to cluster components related to the same task that result from a splitting process occurring at different model orders. Different performances of the similarity measures were found and discussed. Preliminary results on real data are reported and show that the method can group task-related and transiently task-related components. PMID:26839530
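
    A bare-bones version of the clustering step might look as follows, with mutual information estimated from a 2-D histogram (one of the two estimators tested) and converted into a distance for average linkage; the MI-to-distance conversion is an illustrative choice, not the paper's formula.

    ```python
    # Sketch: hierarchical clustering of ICs with an MI-based metric.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def mi_hist(x, y, bins=32):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()                      # joint probability estimate
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

    def cluster_ics(ics, n_clusters):
        n = len(ics)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = 1.0 / (1.0 + mi_hist(ics[i], ics[j]))  # high MI -> close
        Z = linkage(squareform(D, checks=False), method="average")
        return fcluster(Z, t=n_clusters, criterion="maxclust")
    ```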

  8. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
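
    The contrast between holistic and sparse loadings is easy to demonstrate; the sketch below uses scikit-learn's SparsePCA as a stand-in for the specific algorithm reviewed, on placeholder data.

    ```python
    # Sketch: dense PCA vs. sparse PCA loadings.
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    X = np.random.default_rng(0).standard_normal((60, 40))  # placeholder shape data

    dense = PCA(n_components=5).fit(X)
    sparse = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)

    # Sparse modes involve only a subset of variables -> isolated, local effects:
    print("nonzero loadings, dense :", np.count_nonzero(dense.components_, axis=1))
    print("nonzero loadings, sparse:", np.count_nonzero(sparse.components_, axis=1))
    ```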

  9. Principal components granulometric analysis of tidally dominated depositional environments

    SciTech Connect

    Mitchell, S.W. ); Long, W.T. ); Friedrich, N.E. )

    1991-02-01

    Sediments are often investigated by using mechanical sieve analysis (at 1/4 or 1/2 φ intervals) to identify differences in weight-percent distributions between related samples, and thereby to deduce variations in sediment sources and depositional processes. Similar granulometric data from groups of surface samples from two siliciclastic estuaries and one carbonate tidal creek have been clustered using principal components analysis. Subtle geographic trends in tidally dominated depositional processes and in sediment sources can be inferred from the clusters. In Barnstable Harbor, Cape Cod, Massachusetts, the estuary can be subdivided into five major subenvironments, with tidal current intensities/directions and sediment sources (longshore transport or sediments weathering from the Sandwich Moraine) as controls. In Morro Bay, San Luis Obispo County, California, all major environments (beach, dune, bay, delta, and fluvial) can be easily distinguished; a wide variety of subenvironments can be recognized. On Pigeon Creek, San Salvador Island, Bahamas, twelve subenvironments can be recognized. Biogenic (Halimeda, Peneroplis, mixed skeletal), chemogenic (peloids, aggregates), and detrital (lithoclasts of eroding Pleistocene limestone) grain types dominate. When combined with tidal current intensities/directions, grain sources produce subenvironments distributed parallel to tidal channels. The investigation of the three modern environments indicates that principal components granulometric analysis is potentially a useful tool for recognizing subtle changes in transport processes and sediment sources preserved in ancient depositional sequences.

  10. Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis

    PubMed Central

    Rochon, Kateryn; Duehl, Adrian J.; Anderson, John F.; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C.; Obenauer, Peter J.; Campbell, James F.; Lysyk, Tim J.; Allan, Sandra A.

    2015-01-01

    Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium "Advancements in arthropod monitoring technology, techniques, and analysis" presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones, and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae), respectively. After sampling, extrapolating real-world population numbers from trap capture data is possible with the appropriate analysis techniques. Examples of this extrapolation and action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles. PMID:26543242

  11. A component based implementation of agents and brokers for design coordination

    NASA Technical Reports Server (NTRS)

    Weidner, R. J.

    2001-01-01

    NASA's mission design coordination has been based on expert opinion of parametric data presented in Excel or PowerPoint. Common access is required to more powerful design tools supporting performance simulation and analysis. Components provide the means for inexpensively adding the desired functionality.

  12. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report an Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; then ICA was utilized on a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.

  13. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  14. Weighted EMPCA: Weighted Expectation Maximization Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Bailey, Stephen

    2016-09-01

    Weighted EMPCA performs principal component analysis (PCA) on noisy datasets with missing values. Estimates of the measurement error are used to weight the input data such that the resulting eigenvectors, when compared to classic PCA, are more sensitive to the true underlying signal variations rather than being pulled by heteroskedastic measurement noise. Missing data are simply limiting cases of weight = 0. The underlying algorithm is a noise weighted expectation maximization (EM) PCA, which has additional benefits of implementation speed and flexibility for smoothing eigenvectors to reduce the noise contribution.
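
    The core of the noise-weighted EM iteration can be sketched for a single component: alternate between fitting per-observation coefficients and refitting the eigenvector, each step being a weighted least-squares solve. This is a simplified, rank-one illustration of the idea, not the released code.

    ```python
    # Sketch: rank-one weighted EM PCA (weights = inverse variance; 0 = missing).
    import numpy as np

    def weighted_empca_1d(X, W, n_iter=50):
        """Minimize sum_ij W_ij (X_ij - c_i p_j)^2 over c and unit vector p."""
        n, m = X.shape
        p = np.random.default_rng(0).standard_normal(m)
        p /= np.linalg.norm(p)
        for _ in range(n_iter):
            c = (W * X) @ p / np.maximum(W @ p**2, 1e-12)      # fit coefficients
            p = (W * X).T @ c / np.maximum(W.T @ c**2, 1e-12)  # refit eigenvector
            p /= np.linalg.norm(p)
        return c, p
    ```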

  15. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.

  16. Independent component analysis for improving the quality of interferometric products

    NASA Astrophysics Data System (ADS)

    Saqellari Likoka, A.; Vafeiadi-Bila, E.; Karathanassi, V.

    2016-05-01

    The accuracy of InSAR DEMs is affected by the temporal decorrelation of SAR images, which is due to atmosphere, land use/cover, soil moisture, and roughness changes. Eliminating the temporal decorrelation between the master and slave images improves DEM accuracy. In this study, Independent Component Analysis was applied before the interferometric processing. It was observed that, using three images as ICA inputs, the extracted independent sources can be interpreted as background and change images. When ICA is performed on the master and slave images using the same couple of additional images, it produces two background images which enable the production of high-quality DEMs. However, limitations exist in the proposed approach.

  17. Multiplex component-based allergen microarray in recent clinical studies.

    PubMed

    Patelis, A; Borres, M P; Kober, A; Berthold, M

    2016-08-01

    During the last decades, component-resolved diagnostics, either as singleplex or multiplex measurements, has been introduced into the field of clinical allergology, providing important information that cannot be obtained from extract-based tests. Here we review recent studies that demonstrate clinical applications of the multiplex microarray technique in the diagnosis and risk assessment of allergic patients, and its usefulness in studies of allergic diseases. The usefulness of ImmunoCAP ISAC has been validated in a wide spectrum of allergic diseases such as asthma, allergic rhinoconjunctivitis, atopic dermatitis, eosinophilic esophagitis, food allergy, and anaphylaxis. ISAC provides a broad picture of a patient's sensitization profile from a single test, and provides information on specific and cross-reactive sensitizations that facilitates diagnosis, risk assessment, and disease management. Furthermore, it can reveal unexpected sensitizations which may explain anaphylaxis previously categorized as idiopathic, and can also display sensitizations that are, for the moment, not clinically relevant. ISAC can facilitate a better selection of relevant allergens for immunotherapy compared with extract testing. The microarray technique can visualize the allergic march and molecular spreading in the preclinical stages of allergic diseases, and may indicate that the likelihood of developing symptomatic allergy is associated with specific profiles of sensitization to allergen components. ISAC is shown to be a useful tool in routine allergy diagnostics due to its ability to improve risk assessment, to better select relevant allergens for immunotherapy, and to detect unknown sensitizations. Multiplex component testing is especially suitable for patients with complex symptomatology. PMID:27196983

  18. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific indication classification to prioritize their applicability. The extraction process reveals 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  19. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  20. Optimization design and simulation analysis for the key components of 1m aperture photoelectric theodolite

    NASA Astrophysics Data System (ADS)

    San, Xiao-gang; Qiao, Yan-feng; Yu, Shuaibei; Wang, Tao; Tang, Jie

    2014-09-01

    Taking a 1 m aperture photoelectric theodolite as the study object, its key components, including the four-way, turntable, and base, are structurally optimized so as to improve structural rigidity while reducing structural mass. First, each component's working characteristics and relationships with the other parts are studied; based on these, reasonable finite element models of the components are established, and each component's optimal material topology is obtained by continuum topology optimization. According to the structural topology, lightweight truss structure models are constructed and the models' key parameters are size-optimized. Finally, the optimized structures are verified by finite element analysis. The analyses prove that, compared to the traditional structure, the lightweight structures of the theodolite's three key components reduce mass by up to 1095.2 kg and increase the stiffness-to-mass ratio. Meanwhile, for other indexes such as maximum stress, static deformation, and first-order natural frequency, the lightweight structures also perform better than the traditional structure. After alignment, the angular shaking error of the theodolite's horizontal axis was tested by autocollimator, with the results: maximum error υ = 1.82″, mean square error σ = 0.62″. Further, the angular shaking error of the theodolite's vertical axis was tested by a 0.2″ gradienter, with the results: maximum error υ = 1.97″, mean square error σ = 0.706″. The results of all these analyses and tests fully prove that the optimized lightweight key components of this 1 m aperture theodolite are reasonable and effective in satisfying the instrument's requirements.

  1. [Discrimination of Red Tide algae by fluorescence spectra and principal component analysis].

    PubMed

    Su, Rong-guo; Hu, Xu-peng; Zhang, Chuan-song; Wang, Xiu-lin

    2007-07-01

    Fluorescence discrimination technology for 11 species of Red Tide algae at the genus level was constructed by principal component analysis and non-negative least squares. Rayleigh and Raman scattering peaks of the 3D fluorescence spectra were eliminated by the Delaunay triangulation method. According to the results of Fisher linear discrimination, the first and second principal component scores of the 3D fluorescence spectra were chosen as discriminant features and the feature base was established. The 11 algae species were tested, and more than 85% of samples were accurately determined; in particular, for Prorocentrum donghaiense, Skeletonema costatum, and Gymnodinium sp., which have frequently caused Red Tides in the East China Sea, more than 95% of samples were correctly discriminated. The results showed that the genus discriminant features of the 3D fluorescence spectra of Red Tide algae given by principal component analysis work well.

  2. The Evaluation and Research of Multi-Project Programs: Program Component Analysis.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    1977-01-01

    It is difficult to base evaluations on concepts irrelevant to state policy making. Evaluation of a multiproject program requires both time and differentiation of method. Data from the California Early Childhood Program illustrate process variables for program component analysis, and research questions for intraprogram comparison. (CP)

  3. Analysis of Fission Products on the AGR-1 Capsule Components

    SciTech Connect

    Paul A. Demkowicz; Jason M. Harp; Philip L. Winston; Scott A. Ploger

    2013-03-01

    The components of the AGR-1 irradiation capsules were analyzed to determine the retained inventory of fission products in order to determine the extent of in-pile fission product release from the fuel compacts. This includes analysis of (i) the metal capsule components, (ii) the graphite fuel holders, (iii) the graphite spacers, and (iv) the gas exit lines. The fission products most prevalent in the components were Ag-110m, Cs-134, Cs-137, Eu-154, and Sr-90, and the most common locations were the metal capsule components and the graphite fuel holders. Gamma scanning of the graphite fuel holders was also performed to determine the spatial distribution of Ag-110m and radiocesium. Silver was released from the fuel components in significant fractions. The total Ag-110m inventory found in the capsules ranged from 1.2×10⁻² (Capsule 3) to 3.8×10⁻¹ (Capsule 6). Ag-110m was not distributed evenly in the graphite fuel holders, but tended to concentrate at the axial ends of the graphite holders in Capsules 1 and 6 (located at the top and bottom of the test train) and near the axial center in Capsules 2, 3, and 5 (in the center of the test train). The Ag-110m further tended to be concentrated around fuel stacks 1 and 3, the two stacks facing the ATR reactor core and the location of higher burnup, neutron fluence, and temperatures compared with Stack 2. Detailed correlation of silver release with fuel type and irradiation temperatures is problematic at the capsule level due to the large range of temperatures experienced by individual fuel compacts in each capsule. A comprehensive Ag-110m mass balance for the capsules was performed using measured inventories of individual compacts and the inventory on the capsule components. For most capsules, the mass balance was within 11% of the predicted inventory. The Ag-110m release from individual compacts often exhibited a very large range within a particular capsule.

  4. Analysis of three-component ambient vibration array measurements

    NASA Astrophysics Data System (ADS)

    Fäh, Donat; Stamm, Gabriela; Havenith, Hans-Balder

    2008-01-01

    Both synthetic and observed ambient vibration array data are analysed using high-resolution beam-forming. In addition to a classical analysis of the vertical component, this paper presents results derived from processing horizontal components. We analyse phase velocities of fundamental and higher mode Rayleigh and Love waves, and particle motions (ellipticity) retrieved from H/V spectral ratios. A combined inversion with a genetic algorithm and a strategy for selecting possible model parameters allow us to define structural models explaining the data. The results from synthetic data for simple models with one or two layers of sediments suggest that, in most cases, the number of layers has to be reduced to a few sediment strata to find the original structure. Generally, reducing the number of soft-sediment layers in the inversion process with genetic algorithms leads to a class of models that are less smooth. They have a stronger impedance contrast between sediments and bedrock. Combining Love and Rayleigh wave dispersion curves with the ellipticity of the fundamental mode Rayleigh waves has some advantages. Scatter is reduced when compared to using structural models obtained only from Rayleigh wave phase velocity curves. By adding information from Love waves some structures can be excluded. Another possibility for constraining inversion results is to include supplementary geological or borehole information. Analysing radial components also can provide segments of Rayleigh wave dispersion curves for modes not seen on the vertical component. Finally, using ellipticity information allows us to confine the total depth of the soft sediments. For real sites, considerable variability in the measured phase velocity curves is observed. This comes from lateral changes in the structure or seismic sources within the array. Constraining the inversion by combining Love and Rayleigh wave information can help reduce such problems. Frequency bands in which the Rayleigh wave

  5. The 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Mcknight, R. L.

    1983-01-01

    The objective of this research is to develop an analytical tool capable of economically evaluating the cyclic, time-dependent plasticity which occurs in hot-section engine components in areas of strain concentration resulting from the combination of mechanical and thermal stresses. The techniques developed must be capable of accommodating large excursions in temperature, with the associated variations in material properties, including plasticity and creep. The overall objective of this proposed program is to develop advanced 3-D inelastic structural/stress analysis methods and solution strategies for more accurate and yet more cost-effective analysis of combustors, turbine blades, and vanes. The approach will be to develop four different theories, one linear and three higher order, with increasing complexity, including embedded singularities.

  6. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; Moseley, S. H.; Porst, J.-P.; Porter, F. S.; Sadleir, J. E.; Smith, S. J.

    2016-07-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an ⁵⁵Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
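
    For orientation, the decomposition step described above can be sketched in a few lines of numpy: stack the sampled pulse records as rows of a matrix, subtract the mean pulse, and take the leading right singular vectors as the principal components. Everything below (the two-exponential pulse template, the noise level, the number of retained components) is an illustrative assumption, not taken from the record.

        import numpy as np

        # Synthetic stand-in for baseline-subtracted TES pulse records.
        rng = np.random.default_rng(0)
        t = np.arange(1024)
        template = np.exp(-t / 200.0) - np.exp(-t / 20.0)     # assumed pulse shape
        height = rng.uniform(0.95, 1.05, size=500)            # pulse-height spread
        pulses = height[:, None] * template + 0.01 * rng.standard_normal((500, 1024))

        # PCA via SVD of the mean-subtracted pulse matrix.
        mean_pulse = pulses.mean(axis=0)
        U, s, Vt = np.linalg.svd(pulses - mean_pulse, full_matrices=False)
        scores = (pulses - mean_pulse) @ Vt[:3].T             # projections on 3 leading PCs

        # The leading score tracks pulse height; in practice a calibration curve
        # mapping the scores to energy would be fitted on known spectral lines.
        energy_proxy = scores[:, 0]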

  7. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle. PMID:14667134

  8. Anisoplanatic Imaging Through Turbulence Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Baena-Gallé, R.; Katsaggelos, A.; Molina, R.; Mateos, J.; Gladysz, S.

    The performance of optical systems is highly degraded by atmospheric turbulence when observing either vertically (e.g., astronomy, remote sensing) or horizontally (long-range surveillance). This problem can be partially alleviated using adaptive optics (AO), but only for small fields of view (FOV), described by the isoplanatic angle, within which the turbulence-induced aberrations can be considered constant. Additionally, this problem can be tackled using post-processing techniques such as deconvolution algorithms which take into account the variability of the point spread function (PSF) in anisoplanatic conditions. Variability of the PSF across the FOV in anisoplanatic imagery can be described using principal component analysis (Karhunen-Loeve transform). A certain number of variable PSFs can then be used to create new basis functions, called principal components (PC), which can be considered constant across the FOV and, therefore, potentially be used to perform global deconvolution. Our aim is twofold: firstly, to describe the shape and statistics of the anisoplanatic PSF for single-conjugate AO systems with only a few parameters and, secondly, to use this information to obtain a set of PSFs at positions in the FOV such that the associated variability is properly described. These PSFs are then decomposed into PCs. Finally, the entire FOV is deconvolved globally using deconvolution algorithms which account for the uncertainties involved in local estimates of the PSFs. Our approach is tested on simulated, single-conjugate AO data.

  9. Dynamic heart rate estimation using principal component analysis.

    PubMed

    Yu, Yong-Poh; Raveendran, P; Lim, Chern-Loon; Kwan, Ban-Hoe

    2015-11-01

    In this paper, facial images from various video sequences are used to obtain a heart rate reading. In this study, a video camera is used to capture the facial images of eight subjects whose heart rates vary dynamically between 81 and 153 BPM. Principal component analysis (PCA) is used to recover the blood volume pulse (BVP), which can be used for the heart rate estimation. An important consideration for the accuracy of dynamic heart rate estimation is to determine the shortest video duration that achieves it. This duration is chosen as the point at which the six principal components (PCs) are least correlated amongst themselves. When this is achieved, the first PC is used to obtain the heart rate. The results obtained from the proposed method are compared to the readings obtained from a Polar heart rate monitor. Experimental results show the proposed method is able to estimate dynamic heart rate readings with lower computational requirements than the existing method. The mean absolute error and the standard deviation of the absolute errors between experimental readings and actual readings are 2.18 BPM and 1.71 BPM respectively.
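
    A minimal sketch of the core idea, projecting multi-channel facial traces onto the first principal component and reading the heart rate off its spectral peak, might look as follows. The 30 fps frame rate, the 0.75-3 Hz plausibility band, and the synthetic traces are all assumptions for illustration; the paper's criterion for choosing the window length (least-correlated PCs) is not reproduced here.

        import numpy as np

        fs = 30.0                                   # assumed camera frame rate (Hz)
        rng = np.random.default_rng(1)
        n = int(fs * 10)                            # 10 s analysis window
        t = np.arange(n) / fs
        bvp = np.sin(2 * np.pi * 1.8 * t)           # hidden 1.8 Hz pulse = 108 BPM
        gains = np.array([0.3, 0.8, 0.5])           # per-channel pulse strength
        traces = bvp[:, None] * gains + 0.2 * rng.standard_normal((n, 3))

        x = (traces - traces.mean(0)) / traces.std(0)      # standardise channels
        w, v = np.linalg.eigh(np.cov(x, rowvar=False))
        pc1 = x @ v[:, -1]                                 # first principal component

        spec = np.abs(np.fft.rfft(pc1)) ** 2
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        band = (freqs > 0.75) & (freqs < 3.0)              # 45-180 BPM search band
        print(60 * freqs[band][np.argmax(spec[band])])     # ~108 BPM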

  10. Sensitivity analysis on an AC600 aluminum skin component

    NASA Astrophysics Data System (ADS)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced on the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, the application of aluminum alloys for skin panels is increasing. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on developing new anisotropic models for aluminum cup drawing. However, the impact of correct modelling on the latest aluminum alloys for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  12. Revisiting AVHRR Tropospheric Aerosol Trends Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-01-01

    The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean. It offers valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last for years after an eruption, which potentially masks the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols in a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and re-estimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase in aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and in the North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, the North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.
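
    Removing one PCA mode from a space-time data set, as described above, amounts to subtracting a rank-1 matrix. A numpy sketch on a synthetic stand-in follows; the real analysis would identify the volcanic mode by its spatial pattern and post-eruption decay, so index 0 below is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.standard_normal((300, 500))        # (months, grid cells) AOD stand-in

        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

        k = 0                                      # assumed index of the volcanic mode
        volcanic = np.outer(U[:, k] * s[k], Vt[k]) # rank-1 field carried by that mode
        X_clean = X - volcanic                     # reconstruction with the mode removed
        # trends re-estimated on X_clean are free of the signal captured by mode k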

  13. Demixed principal component analysis of neural population data

    PubMed Central

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-01-01

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378

  14. Derivation of Boundary Manikins: A Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.
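
    A common construction for boundary cases places manikins on a constant-probability ellipse in the space of the leading principal components and maps them back to measurement space. Below is a sketch with made-up measurements, two retained components, and a 2-sigma boundary, all of which are assumptions since the record does not give these details.

        import numpy as np

        rng = np.random.default_rng(3)
        # (subjects, measures): e.g. stature, sitting height, shoulder breadth (mm)
        A = rng.multivariate_normal([1740.0, 900.0, 460.0],
                                    [[60.0, 40.0, 15.0],
                                     [40.0, 50.0, 10.0],
                                     [15.0, 10.0, 20.0]], size=1000)

        mu = A.mean(axis=0)
        w, v = np.linalg.eigh(np.cov(A, rowvar=False))   # ascending eigenvalues
        pcs = v[:, ::-1][:, :2]                          # two leading eigenvectors
        sd = np.sqrt(w[::-1][:2])                        # their standard deviations

        theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        circle = np.column_stack([np.cos(theta), np.sin(theta)])
        manikins = mu + (circle * 2.0 * sd) @ pcs.T      # 8 boundary cases at ~2 sigma
        print(manikins.round(0))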

  15. Prognostic Health Monitoring System: Component Selection Based on Risk Criteria and Economic Benefit Assessment

    SciTech Connect

    Binh T. Pham; Vivek Agarwal; Nancy J Lybeck; Magdy S Tawfik

    2012-05-01

    Prognostic health monitoring (PHM) is a proactive approach to monitor the ability of structures, systems, and components (SSCs) to withstand structural, thermal, and chemical loadings over the SSCs' planned service lifespans. The current efforts to extend the operational license lifetime of the aging fleet of U.S. nuclear power plants from 40 to 60 years and beyond can benefit from a systematic application of PHM technology. Implementing a PHM system would strengthen the safety of nuclear power plants, reduce plant outage time, and reduce operation and maintenance costs. However, a nuclear power plant has thousands of SSCs, so implementing a PHM system that covers all SSCs requires careful planning and prioritization. This paper therefore focuses on a component selection that is based on the analysis of a component's failure probability, risk, and cost. Ultimately, the decision on component selection depends on the overall economic benefits arising from the safety and operational considerations associated with implementing the PHM system.
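
    The selection logic reduces to ranking candidates by expected net benefit. A toy sketch of such a ranking follows; the component names, failure probabilities, costs, and the assumed 50% risk reduction from PHM are placeholders, not figures from the paper.

        # Rank candidate SSCs by the expected net benefit of instrumenting them.
        components = {
            # name: (annual failure probability, consequence cost $, PHM cost $)
            "feedwater pump":      (0.020, 5.0e6, 4.0e5),
            "main transformer":    (0.008, 2.0e7, 6.0e5),
            "service water valve": (0.050, 3.0e5, 1.5e5),
        }

        def net_benefit(p_fail, consequence, phm_cost, risk_reduction=0.5):
            # expected avoided loss if PHM cuts failure probability in half
            return p_fail * risk_reduction * consequence - phm_cost

        for name, args in sorted(components.items(),
                                 key=lambda kv: net_benefit(*kv[1]), reverse=True):
            print(f"{name:20s} net benefit: ${net_benefit(*args):>12,.0f}")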

  16. Component analysis of dental porcelain for assisting dental identification.

    PubMed

    Aboshi, H; Takahashi, T; Komuro, T

    2006-12-01

    The fluorescence of porcelain crowns recovered from the mouth of an unknown murder victim, and of several control porcelain samples, was examined under fluorescent examination lamps. The fluorescence from two of the control samples was quite similar to that from the porcelain crowns recovered from the victim. To increase the objectivity of the results by quantitative analysis, the composition of each porcelain crown and control sample was also evaluated with a wavelength-dispersive X-ray microanalyser. The elements detected in the porcelain crowns of the victim matched those of two of the porcelain samples. Later, the antemortem dental records and radiographs of the victim were obtained through a dentist, who had recognized the name of the porcelain manufacturer in a postmortem dental information request placed on the Japanese Dental Association web page. Although component analysis of dental porcelain may be an effective means of assisting dental identification, a more rapid and non-destructive analysis for detecting the elements is required. An energy-dispersive X-ray fluorescence (EDXRF) spectrometer was used for a pilot study of the identification of porcelain composition.

  17. Analysis of Performance of Jet Engine from Characteristics of Components II : Interaction of Components as Determined from Engine Operation

    NASA Technical Reports Server (NTRS)

    Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl

    1949-01-01

    In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.

  18. Efficacy-oriented compatibility for component-based Chinese medicine.

    PubMed

    Zhang, Jun-hua; Zhu, Yan; Fan, Xiao-hui; Zhang, Bo-li

    2015-06-01

    Single-target drugs have not achieved satisfactory therapeutic effects for complex diseases involving multiple factors. Instead, innovations in recent drug research and development have revealed the emergence of compound drugs, such as cocktail therapies and "polypills", as the frontier in new drug development. A traditional Chinese medicine (TCM) prescription, usually composed of several medicinal herbs, can serve as a typical representative of compound medicines. Although the traditional compatibility theory of TCM cannot be well expressed in modern scientific language, the fundamental purpose of TCM compatibility can be understood as promoting efficacy and reducing toxicity. This paper introduces the theory and methods of efficacy-oriented compatibility for developing component-based Chinese medicines.

  19. A Local Learning Rule for Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Isomura, Takuya; Toyoizumi, Taro

    2016-06-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering.
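
    The abstract does not spell out the proposed rule, so no attempt is made to reproduce it here. For orientation only, a minimal online ICA update of the classical natural-gradient (infomax) family, a standard baseline against which such rules are typically compared, can be sketched as follows; all settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)
        n, T = 3, 20000
        S = rng.laplace(size=(n, T))                 # super-Gaussian sources
        X = rng.standard_normal((n, n)) @ S          # unknown linear mixture

        # whiten the mixtures (note the paper's rule needs no such preprocessing)
        Xc = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(Xc))
        Z = (E / np.sqrt(d)) @ E.T @ Xc

        W, lr = np.eye(n), 1e-3
        for t in range(T):
            y = W @ Z[:, t:t + 1]
            # natural-gradient infomax step; g = tanh suits super-Gaussian sources
            W += lr * (np.eye(n) - np.tanh(y) @ y.T) @ W
        # rows of W applied to Z now approximate the sources up to order and scale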

  20. Independent component analysis (ICA) using wavelet subband orthogonality

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Yamakawa, Takeshi

    1998-03-01

    There are two kinds of RRP: (1) invertible ones, such as the global Fourier transform (FT), the local wavelet transform (WT), and the adaptive wavelet transform (AWT); and (2) non-invertible ones, e.g. ICA, including global principal component analysis (PCA). The invertible FT and WT can be related to the non-invertible ICA when the continuous transforms are approximated in discrete matrix-vector operations. The landmark accomplishment of ICA is to obtain, by an unsupervised learning algorithm, the edge map as the image feature, shown by the Helsinki researchers using fourth-order statistics, the kurtosis K(u), and derived from information-theoretic first principles; this is augmented here by the orthogonality property of the DWT subbands necessarily used for standard image compression. If we take advantage of the subband decorrelation, we can potentially make efficient use of a pair of communication channels by sending several more mixed subband images through the pair of channels.

  1. Functional principal components analysis of workload capacity functions

    PubMed Central

    Burns, Devin M.; Houpt, Joseph W.; Townsend, James T.; Endres, Michael J.

    2013-01-01

    Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals. PMID:23475829

  3. Quadrature component analysis of interferograms with random phase shifts

    NASA Astrophysics Data System (ADS)

    Xu, Jiancheng; Chen, Zhao

    2014-08-01

    Quadrature component analysis (QCA) is an effective method for analyzing interferograms when the phase shifts are uniformly distributed in the [0, 2π] range. However, this requirement is hard to meet in practical applications, so a parameter named the non-orthogonal degree (NOD) is proposed to quantify the extent to which the phase shifts depart from such a distribution. The relation between the NOD and the accuracy of the QCA algorithm is analyzed by numerical simulation, which yields the relation between the distribution of the phase shifts and the accuracy of the algorithm. The relation is discussed and verified by numerical simulations and experiments.

  4. Diagnosis of nonlinear systems using kernel principal component analysis

    NASA Astrophysics Data System (ADS)

    Kallas, M.; Mourot, G.; Maquin, D.; Ragot, J.

    2014-12-01

    Technological advances in the process industries during the past decade have resulted in increasingly complicated processes, systems and products. Recent research therefore considers the challenges in their design and management for successful operation. While the principal component analysis (PCA) technique is widely used for diagnosis, its structure cannot describe nonlinearly related variables. Thus, an extension to the case of nonlinear systems is presented in a feature space for process monitoring. Since the method works in a high-dimensional feature space, it is necessary to map back to the original space. Hence, an iterative pre-image technique is derived to provide a solution for fault diagnosis. The relevance of the proposed technique is illustrated on artificial and real datasets.
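
    As a rough sketch of the monitoring idea (not the authors' iterative pre-image technique): fit kernel PCA on normal operating data, reconstruct each new sample from its feature-space projection, and flag samples whose reconstruction error exceeds an empirical control limit. scikit-learn's ridge-based inverse transform stands in for the pre-image step; the RBF kernel width, component count, and 99% limit are assumptions.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(5)
        t = rng.uniform(-1, 1, size=(500, 1))
        X_train = np.hstack([t, t ** 2]) + 0.02 * rng.standard_normal((500, 2))

        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=5.0,
                         fit_inverse_transform=True).fit(X_train)

        def spe(X):
            # squared prediction error between samples and their KPCA reconstruction
            X_hat = kpca.inverse_transform(kpca.transform(X))
            return np.sum((X - X_hat) ** 2, axis=1)

        limit = np.quantile(spe(X_train), 0.99)       # empirical 99% control limit
        x_new = np.array([[0.5, 0.9]])                # violates the quadratic relation
        print(spe(x_new) > limit)                     # expected: [ True]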

  5. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on analysis of the eigenvalues and eigenvectors, principal component analysis aims to identify, from a set of parameters, the subspace of principal components that is sufficient to characterize the whole set. Interpreting the data under analysis as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps us select, from a set of items, only the most relevant ones, which cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, we identified 7 principal components (or main items), a fact that significantly simplifies the further statistical analysis of the data. PMID:21495371

  7. Swift model for a lower heating value prediction based on wet-based physical components of municipal solid waste.

    PubMed

    Lin, Chien-Jung; Chyan, Jih-Ming; Chen, I-Ming; Wang, Yi-Tun

    2013-02-01

    The aim was to establish an empirical model for predicting the lower heating value (LHV) easily and economically by multiple regression analysis. A wet-based physical components model (WBPCM) was developed, based on physical component analysis without dewatering, from 497 samples of municipal solid waste (MSW) gathered from 14 incinerators in western Taiwan from 2002 to 2009. The proposed model was verified on independent samples from other incinerators using the multiple correlation coefficient (R), relative percentage deviation (RPD) and mean absolute percentage error (MAPE). Experimental results indicated that R, RPD and MAPE were 0.976, 17.1 and 17.7, respectively. This finding implies that the LHV predicted by the WBPCM explains the LHV characteristics of MSW well. The WBPCM was also compared with existing dry-basis models for predicting LHV. While predicting LHV more accurately than models based on proximate analysis, the WBPCM was comparable, in terms of RPD and MAPE, with models based on physical component analysis. Experimental results further indicated that the prediction accuracy of the WBPCM varied parabolically with MSW moisture; no such relation was observed in the results of the previous prediction models. The accuracy of the WBPCM nearly approached that of ultimate analysis for moisture contents between 40% and 55%, and the model is applicable within this moisture range. We conclude that the WBPCM is a faster and more economical model for LHV prediction, with accuracy comparable to that of models based on physical component analysis.
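
    The model class here is an ordinary least-squares fit of LHV on wet-basis component fractions, scored with the same statistics the paper reports. A self-contained sketch on synthetic data follows; the four component categories, their coefficients, and the noise level are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 200
        # wet-basis mass fractions of four waste categories, summing to 1
        frac = rng.dirichlet(alpha=[4, 3, 2, 6], size=n)
        true_coef = np.array([4.0, 14.0, 32.0, -2.5])       # MJ/kg per unit fraction
        lhv = frac @ true_coef + rng.normal(0.0, 0.5, size=n)

        beta, *_ = np.linalg.lstsq(frac, lhv, rcond=None)   # multiple regression
        pred = frac @ beta

        r = np.corrcoef(lhv, pred)[0, 1]
        mape = 100 * np.mean(np.abs((lhv - pred) / lhv))
        print(f"R = {r:.3f}, MAPE = {mape:.1f}%")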

  8. Blind separation of human- and horse-footstep signatures using independent component analysis

    NASA Astrophysics Data System (ADS)

    Mehmood, Asif; Damarla, Thyagaraju

    2012-06-01

    Seismic footstep-detection systems are important to perimeter protection and other homeland security applications. This paper reports seismic footstep signal separation for a walking horse and a walking human. The well-known Independent Component Analysis (ICA) approach is employed to accomplish this task. ICA techniques have become widely used in audio analysis and source separation. The concept of ICA may be seen as an extension of principal component analysis (PCA), which can only impose independence up to second order and, consequently, defines directions that are orthogonal. ICA can also be used in conjunction with a classification method to achieve a high percentage of correct classification and reduce false alarms. In this paper, an ICA-based algorithm is developed and applied to seismic data of human and horse footsteps. The performance of this method is very promising, as demonstrated by the experimental results.
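
    A blind-separation sketch of the same flavour, using scikit-learn's FastICA on two synthetic impulsive step trains recorded by two sensors. The cadences, decay constants, and mixing matrix are invented; recovered sources come back in arbitrary order and scale, as with any ICA.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(7)
        fs = 1000
        t = np.arange(10 * fs) / fs
        human = np.exp(-(t % 0.50) * 12.0)        # ~2 impacts/s, decaying transients
        horse = np.exp(-(t % 0.28) * 9.0)         # faster cadence
        S = np.c_[human - human.mean(), horse - horse.mean()]

        A = np.array([[0.7, 0.3],                 # sensor mixing (illustrative)
                      [0.4, 0.6]])
        X = S @ A.T + 0.01 * rng.standard_normal(S.shape)

        S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
        # columns of S_hat approximate the human and horse trains, up to order/scale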

  9. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against 94.50% obtained with the time-frequency-based system.
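
    Sample entropy itself is easy to state: SampEn(m, r) is the negative log of the conditional probability that two sequences matching for m points (within tolerance r, Chebyshev distance) also match for m + 1 points, excluding self-matches. Below is a compact numpy implementation with the conventional m = 2 and r = 0.2·SD; the paper's exact settings are not given in the abstract.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            x = np.asarray(x, dtype=float)
            r = r_frac * x.std()

            def matched_pairs(mm):
                # all length-mm templates (the canonical definition trims one
                # template of length m; this compact version uses all windows)
                emb = np.lib.stride_tricks.sliding_window_view(x, mm)
                d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=-1)
                return (np.count_nonzero(d <= r) - len(emb)) / 2  # drop self-matches

            return -np.log(matched_pairs(m + 1) / matched_pairs(m))

        rng = np.random.default_rng(8)
        regular = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * rng.standard_normal(1000)
        noise = rng.standard_normal(1000)
        print(sample_entropy(regular), sample_entropy(noise))  # regular signal is lower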

  10. Analysis of Femoral Components of Cemented Total Hip Arthroplasty

    NASA Astrophysics Data System (ADS)

    Singh, Shantanu; Harsha, A. P.

    2015-10-01

    There have been continuous, ongoing revisions in the design of prostheses in Total Hip Arthroplasty (THA) to improve the endurance of hip replacements. In the present work, Finite Element Analysis was performed on cemented THA with CoCrMo trapezoidal, CoCrMo circular, Ti6Al4V trapezoidal and Ti6Al4V circular stems. It was observed that the cross section and material of the femoral stem are critical parameters for the stress distribution in femoral components, the distribution of interfacial stress, and micro-movements. The first part of the analysis investigated the designs for micro-movements and developed stress for different stem materials. The later part of the analysis focused on different stem cross sections. A femoral stem made of titanium alloy (Ti6Al4V) resulted in larger debonding of the stem at the cement-stem interface and increased stress within the cement mantle, in contrast to the cobalt-chromium alloy (CoCrMo) stem. Thus, CoCrMo proved to be a better choice for cemented THA. Comparison between CoCrMo femoral stems of trapezoidal and circular cross section showed that the trapezoidal stem experiences less sliding and debonding at the interfaces than the circular cross-section stem. The trapezoidal cross section also generated lower peak stress in the femoral stem and cortical femur. In the present study, a femoral head with a diameter of 36 mm was considered for the analysis in order to avoid dislocation of the stem. The metallic femoral head was coupled with a cross-linked polyethylene liner, as it experiences negligible wear compared with a conventional polyethylene liner and, unlike a metallic liner, is non-carcinogenic.

  11. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to a 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design, with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas reliability can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code as the deterministic analysis tool, (2) the fast probabilistic integrator, or FPI module, of the NESSUS software as the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards as the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life raked-wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  12. 78 FR 6344 - Certain Wireless Communications Base Stations and Components Thereof Notice of Receipt of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... COMMISSION Certain Wireless Communications Base Stations and Components Thereof Notice of Receipt of... received a complaint entitled Certain Wireless Communications Base Stations and Components Thereof, DN 2934... the sale within the United States after importation of certain wireless communications base...

  13. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.

  14. Principal component analysis of indocyanine green fluorescence dynamics for diagnosis of vascular diseases

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee

    2015-03-01

    Indocyanine green (ICG), a near-infrared fluorophore, has been used in the visualization of vascular structure and the non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine the ICG spatiotemporal profile and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet showed significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics reveals better classification performance compared with fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.

  15. Residual Strength Analysis Methodology: Laboratory Coupons to Structural Components

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Newman, J. C., Jr.; Starnes, J. H., Jr.; Rose, C. A.; Young, R. D.; Seshadri, B. R.

    2000-01-01

    The NASA Aircraft Structural Integrity Program (NASIP) and Airframe Airworthiness Assurance/Aging Aircraft (AAA/AA) Programs have developed a residual strength prediction methodology for aircraft fuselage structures. This methodology has been experimentally verified for structures ranging from laboratory coupons up to full-scale structural components. The methodology uses the critical crack tip opening angle (CTOA) fracture criterion to characterize the fracture behavior and a materially and geometrically nonlinear finite element shell analysis code to perform the structural analyses. The present paper presents the results of a study to evaluate the fracture behavior of 2024-T3 aluminum alloys with thicknesses of 0.04 inches to 0.09 inches. The critical CTOA and the corresponding plane-strain core height necessary to simulate through-the-thickness effects at the crack tip in an otherwise plane-stress analysis were determined from small laboratory specimens. Using these parameters, the CTOA fracture criterion was used to predict the behavior of middle-crack tension specimens that were up to 40 inches wide, flat panels with riveted stiffeners and multiple-site damage cracks, 18-inch diameter pressurized cylinders, and full-scale curved stiffened panels subjected to internal pressure and mechanical loads.

  16. Kernel principal component analysis for stochastic input model generation

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2011-08-10

    Highlights: KPCA is used to construct a reduced-order stochastic model of permeability; a new approach is proposed to solve the pre-image problem in KPCA; polynomial chaos is used to provide a parametric stochastic input model; flow in porous media with channelized permeability is considered. - Abstract: Stochastic analysis of random heterogeneous media provides useful information only if realistic input models of the material property variations are used. These input models are often constructed from a set of experimental samples of the underlying random field. To this end, the Karhunen-Loeve (K-L) expansion, also known as principal component analysis (PCA), is the most popular model reduction method due to its uniform mean-square convergence. However, it only projects the samples onto an optimal linear subspace, which results in an unreasonable representation of the original data if they are non-linearly related to each other. In other words, it only preserves the first-order (mean) and second-order statistics (covariance) of a random field, which is insufficient for reproducing complex structures. This paper applies kernel principal component analysis (KPCA) to construct a reduced-order stochastic input model for the material property variation in heterogeneous media. KPCA can be considered as a nonlinear version of PCA. Through use of kernel functions, KPCA further enables the preservation of higher-order statistics of the random field, instead of just two-point statistics as in the standard Karhunen-Loeve (K-L) expansion. Thus, this method can model non-Gaussian, non-stationary random fields. In this work, we also propose a new approach to solve the pre-image problem involved in KPCA. In addition, polynomial chaos (PC) expansion is used to represent the random coefficients in KPCA which provides a parametric stochastic input model. Thus, realizations, which are statistically consistent with the experimental data, can be

  17. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal components analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are used to generate discriminant features for the classification. Independent components analysis (ICA), based on the minimization of mutual information, can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed with respect to grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure for determining the number of extracted features (PCs, ICs) for the optimal DA models is discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements.
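
    The preprocessing swap is straightforward to express as two scikit-learn pipelines. The sketch below uses the built-in load_wine dataset (13 chemical measurements, not NMR spectra) purely as a stand-in, and the choice of 8 extracted components is arbitrary.

        from sklearn.datasets import load_wine
        from sklearn.decomposition import PCA, FastICA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_wine(return_X_y=True)       # stand-in for binned NMR spectra

        for reducer in (PCA(n_components=8),
                        FastICA(n_components=8, random_state=0, max_iter=2000)):
            clf = make_pipeline(StandardScaler(), reducer,
                                LinearDiscriminantAnalysis())
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{type(reducer).__name__:8s} -> {100 * acc:.1f}% correct")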

  18. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The contributions of three major gasoline blending components (reformate, alkylate and cracked gasoline) to potential environmental impacts are assessed. This study estimates losses of the gasoline blending components due to evaporation and leaks through their life cycle, from pe...

  19. Biochemical component identification by light scattering techniques in whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-03-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins (albumin, interferon, C-reactive protein), microelements (Na+, Ca+), and antibiotics of different generations, in both single- and multi-component solutions over a wide range of concentrations. The analysis was performed on the light-scattering parameters of a whispering gallery mode (WGM) optical resonance based sensor, with glass and PMMA dielectric microspheres as sensitive elements, fixed by spin-coating in an adhesive layer on the substrate surface or directly on the coupling element. The sensitive layer was integrated into a fluidic cell with a digital syringe. Light from a tunable laser, focused tightly on a single microsphere and scattered by it, was detected by a CMOS camera. The image was filtered for noise reduction and integrated along two coordinates to evaluate the integrated energy of the measured signal. The following signal parameters were used as input data: the spectral shift of the WGM optical resonance in the microsphere relative to a free spectral range, and the relative efficiency of WGM excitation obtained within a free spectral range, both of which depend on the type and concentration of the investigated agents. Multiplexing over parameters and components was realized using the spatial and spectral parameters of the light scattered by the microsphere, with dedicated data processing. Biochemical component classification and identification of the agents under investigation were performed by network analysis techniques based on a probabilistic network and a multilayer perceptron. The developed approach is demonstrated to be applicable both to single-agent and to multi-component biochemical analysis.

  20. Spatial control of groundwater contamination, using principal component analysis

    NASA Astrophysics Data System (ADS)

    Rao, N. Subba

    2014-06-01

    A study on the geochemistry of groundwater was carried out in a river basin of Andhra Pradesh to probe the spatial controlling processes of groundwater contamination, using principal component analysis (PCA). The PCA transforms the chemical variables pH, EC, Ca²⁺, Mg²⁺, Na⁺, K⁺, HCO₃⁻, Cl⁻, SO₄²⁻, NO₃⁻ and F⁻ into two orthogonal principal components (PC1 and PC2), accounting for 75% of the total variance of the data matrix. PC1 has high positive loadings of EC, Na⁺, Cl⁻, SO₄²⁻, Mg²⁺ and Ca²⁺, representing a salinity-controlled process of geogenic (mineral dissolution, ion exchange, and evaporation), anthropogenic (agricultural activities and domestic wastewaters), and marine (marine clay) origin. The PC2 loadings are highly positive for HCO₃⁻, F⁻, pH and NO₃⁻, attributed to the alkalinity- and pollution-controlled processes of geogenic and anthropogenic origin. The PC scores reflect the change of groundwater quality of geogenic origin from the upstream to the downstream area, with an increase in the concentrations of chemical variables, due to anthropogenic and marine influences with varying topography, soil type, depth of water levels, and water usage. Thus, the groundwater quality shows a variation of chemical facies from Na⁺ > Ca²⁺ > Mg²⁺ > K⁺ : HCO₃⁻ > Cl⁻ > SO₄²⁻ > NO₃⁻ > F⁻ at high topography to Na⁺ > Mg²⁺ > Ca²⁺ > K⁺ : Cl⁻ > HCO₃⁻ > SO₄²⁻ > NO₃⁻ > F⁻ at low topography. With PCA, an effective tool for identifying the spatial controlling processes of groundwater contamination, a subset of explored wells is indexed for continuous monitoring to optimize the expensive effort.

  1. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduces additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on the assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, a general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components using the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of small components. To reduce the uncertainty of the plane measurement, an evaluation index for the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
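
    Under the stated normality assumption, linearised error propagation takes the familiar form Σq ≈ J Σp Jᵀ, where J is the Jacobian of the function mapping measured reference-point coordinates to the component pose. A toy sketch follows (two reference points fixing an in-plane position and orientation; the geometry and the 0.05 mm measurement sigma are invented, not the paper's HUD case).

        import numpy as np

        def pose(p):
            # pose (x of midpoint, orientation angle) from two measured points
            (x1, y1), (x2, y2) = p.reshape(2, 2)
            return np.array([(x1 + x2) / 2.0, np.arctan2(y2 - y1, x2 - x1)])

        p0 = np.array([0.0, 0.0, 100.0, 0.0])     # nominal coordinates (mm)
        Sigma_p = 0.05 ** 2 * np.eye(4)           # i.i.d. 0.05 mm per coordinate

        eps = 1e-6                                # numerical Jacobian of pose(p)
        J = np.array([(pose(p0 + eps * e) - pose(p0 - eps * e)) / (2 * eps)
                      for e in np.eye(4)]).T

        Sigma_q = J @ Sigma_p @ J.T               # propagated pose covariance
        print(np.sqrt(np.diag(Sigma_q)))          # 1-sigma in mm and radians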

  2. The Use of Exploratory Factor Analysis and Principal Components Analysis in Communication Research.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Dailey, Rene; Lemus, Daisy

    2002-01-01

    Discusses the distinct purposes of principal components analysis (PCA) and exploratory factor analysis (EFA), using two data sets as examples. Reviews the use of each technique in three major communication journals: "Communication Monographs,""Human Communication Research," and "Communication Research." Finds that the use of EFA and PCA indicates…

  3. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper-and-pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Both instruments showed satisfactory Cronbach alpha values for all components. The HRP components were reduced from five to four, which explained 70% of the variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, the environmental tobacco smoke (ETS) items did not form a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of the variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. The findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc. PMID:27227487

  5. Assessment of models for pedestrian dynamics with functional principal component analysis

    NASA Astrophysics Data System (ADS)

    Chraibi, Mohcine; Ensslen, Tim; Gottschalk, Hanno; Saadi, Mohamed; Seyfried, Armin

    2016-06-01

    Many agent-based simulation approaches have been proposed for pedestrian flow. As such models are applied, e.g., in evacuation studies, their quality and reliability is of vital interest. Pedestrian trajectories are functional data, and thus functional principal component analysis is a natural tool to assess the quality of pedestrian flow models beyond average properties. In this article we conduct functional Principal Component Analysis (PCA) for the trajectories of pedestrians passing through a bottleneck. In this way it is possible to assess the quality of the models not only on the basis of average values but also by considering their fluctuations. We benchmark two agent-based models of pedestrian flow against the experimental data using both the average and the stochastic features captured by the PCA. Functional PCA proves to be an efficient tool to detect deviations between simulation and experiment and to assess the quality of pedestrian models.
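
    As a rough, self-contained illustration of the workflow described above (not the authors' code), the sketch below performs functional PCA on synthetic pedestrian trajectories sampled on a common time grid; the data and all variable names are invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-in for pedestrian x(t) trajectories through a bottleneck:
      # 50 pedestrians, each resampled to a common grid of 100 time points.
      t = np.linspace(0.0, 1.0, 100)
      trajectories = np.array([
          3.0 * t + 0.3 * rng.normal() * np.sin(np.pi * t)
          + 0.05 * rng.normal(size=t.size)
          for _ in range(50)
      ])

      # On a common grid, functional PCA reduces to ordinary PCA of the curves.
      mean_curve = trajectories.mean(axis=0)
      centered = trajectories - mean_curve
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      explained = s**2 / np.sum(s**2)        # variance share of each mode
      scores = centered @ Vt.T               # per-pedestrian scores on each mode

      print("variance explained by the first two modes:", explained[:2].round(3))
      # Comparing the score distributions of experiment and simulation probes
      # the fluctuations of the flow, not just its mean behaviour.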

  6. Fully automated diabetic retinopathy screening using morphological component analysis.

    PubMed

    Imani, Elaheh; Pourreza, Hamid-Reza; Banaee, Touka

    2015-07-01

    Diabetic retinopathy is the major cause of blindness in the world. It has been shown that early diagnosis can play a major role in the prevention of visual loss and blindness. This diagnosis can be made through regular screening and timely treatment. Moreover, automating this process can significantly reduce the workload of ophthalmologists and alleviate inter- and intra-observer variability. This paper provides a fully automated diabetic retinopathy screening system with the ability to assess retinal image quality. The novelty of the proposed method lies in the use of the Morphological Component Analysis (MCA) algorithm to discriminate between normal and pathological retinal structures. To this end, first a pre-screening algorithm is used to assess the quality of retinal images. If the quality of the image is not satisfactory, it is examined by an ophthalmologist and must be recaptured if necessary. Otherwise, the image is processed for diabetic retinopathy detection. In this stage, normal and pathological structures of the retinal image are separated by the MCA algorithm. Finally, the normal and abnormal retinal images are distinguished by statistical features of the retinal lesions. Our proposed system achieved 92.01% sensitivity and 95.45% specificity on the Messidor dataset, which is a remarkable result in comparison with previous work.

  7. Multi-class stain separation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Trahearn, Nicholas; Snead, David; Cree, Ian; Rajpoot, Nasir

    2015-03-01

    Stain separation is the process whereby a full colour histology section image is transformed into a series of single-channel images, each corresponding to a given stain's expression. Many algorithms in the field of digital pathology are concerned with the expression of a single stain; stain separation is therefore a key preprocessing step in these situations. We present a new versatile method of stain separation. The method uses Independent Component Analysis (ICA) to determine a set of statistically independent vectors, corresponding to the individual stain expressions. In comparison to other popular approaches, such as PCA and NNMF, we found that ICA gives a superior projection of the data with respect to each stain. In addition, we introduce a correction step to improve the initial results provided by the ICA coefficients. Many existing approaches only consider separation of two stains, with primary emphasis on Haematoxylin and Eosin. We show that our method is capable of making a good separation when there are more than two stains present. We also demonstrate our method's ability to achieve good separation on a variety of different stain types.
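
    A minimal sketch of ICA-based stain separation, assuming synthetic Beer-Lambert optical-density data and scikit-learn's FastICA; the stain vectors below are illustrative stand-ins, not values from the paper.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)

      # Synthetic optical-density (OD) pixels: three stains with fixed absorption
      # vectors, mixed per pixel (Beer-Lambert law). Vectors are illustrative only.
      stain_od = np.array([[0.65, 0.70, 0.29],    # haematoxylin-like
                           [0.07, 0.99, 0.11],    # eosin-like
                           [0.27, 0.57, 0.78]])   # hypothetical third stain
      concentrations = rng.exponential(0.5, size=(10000, 3))
      od_pixels = concentrations @ stain_od + 0.01 * rng.normal(size=(10000, 3))

      # ICA looks for statistically independent per-stain expressions.
      ica = FastICA(n_components=3, random_state=0)
      expressions = ica.fit_transform(od_pixels)   # one channel per stain
      mixing = ica.mixing_                         # columns ~ stain OD vectors

      print("recovered stain vectors (up to order and scale):")
      print((mixing / np.linalg.norm(mixing, axis=0)).round(2))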

  8. Target detection in FLIR imagery using independent component analysis

    NASA Astrophysics Data System (ADS)

    Sadeque, A. Z.; Alam, M. S.

    2006-05-01

    In this paper, we propose a target detection algorithm for FLIR imagery using independent component analysis (ICA). FLIR images of real targets with practical background regions are used for training. The dimension of the training regions is chosen depending on the size of the target. After performing the ICA transformation on these training images, we obtain an ICA matrix, whose rows are the transformed versions of the training images, and a weight matrix. Using these matrices, a transformed matrix of the input image can be computed with enhanced features. The cosine of the angle between the training and test vectors is then employed as the parameter for detecting the unknown target. A test region of the same size as the training region is selected from the first frame of the FLIR image. This region is transformed following the proposed algorithm, and the cosine value is measured between this transformed vector and the corresponding vector of the transformed training matrix. Next, the test region is shifted by one pixel and the same transformation and measurement are performed. Thus the whole input frame is scanned, yielding a matrix of cosine values. Finally, a target is detected in the region of the input frame that gives the highest cosine value. A detailed computer simulation program was developed for the proposed algorithm, and satisfactory performance was observed when tested with real FLIR images.
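
    The scan-and-match step described above can be illustrated with a small sliding-window cosine-similarity sketch (the ICA feature transform is omitted for brevity; the data and names are invented):

      import numpy as np

      def cosine_map(frame, template):
          """Slide the template over the frame and return, for every offset,
          the cosine of the angle between template and image-patch vectors."""
          th, tw = template.shape
          tvec = template.ravel()
          tnorm = np.linalg.norm(tvec)
          H, W = frame.shape
          out = np.zeros((H - th + 1, W - tw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  patch = frame[i:i + th, j:j + tw].ravel()
                  denom = np.linalg.norm(patch) * tnorm
                  out[i, j] = patch @ tvec / denom if denom > 0 else 0.0
          return out

      rng = np.random.default_rng(2)
      frame = rng.random((64, 64))          # cluttered stand-in "FLIR frame"
      target = rng.random((8, 8))           # stand-in "training region"
      frame[20:28, 30:38] = target          # embed the target in the frame
      scores = cosine_map(frame, target)
      print("peak cosine at", np.unravel_index(scores.argmax(), scores.shape))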

  9. The Effectiveness of Blind Source Separation Using Independent Component Analysis for GNSS Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Dong, Danan; Chen, Wen

    2016-04-01

    With the development of GNSS technology and the improvement of its positioning accuracy, observational data obtained by GNSS are widely used in space geodesy and geodynamics research. GNSS time series from observation stations contain a wealth of information, including geographical changes, deformation of the Earth, migration of subsurface material, instantaneous and weak deformation, and other blind signals. To decompose the instantaneous deformation, weak deformation and other blind signals hidden in GNSS time series, we apply Independent Component Analysis (ICA) to daily station coordinate time series of the Southern California Integrated GPS Network. Because ICA is based on the statistical characteristics of the observed signal, it exploits non-Gaussianity and statistical independence to recover the source signals of the underlying geophysical events. As part of the post-processing of precise GNSS time series, this paper applies both the principal component analysis (PCA) module of QOCA and an ICA algorithm to separate the source signals, and compares the two techniques for extracting signals related to geophysical disturbances from the observations. The analysis demonstrates that, when multiple factors are present, PCA suffers from ambiguity in the separation of source signals, whereas ICA performs better; ICA is therefore the more suitable choice for GNSS time series in which the combination of source signals is unknown.

  10. A sensitivity analysis on component reliability from fatigue life computations

    NASA Astrophysics Data System (ADS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.; Rudalevige, Trevor

    1992-02-01

    Some uncertainties in determining high component reliability at a specified lifetime are identified from a case study involving the fatigue life of a helicopter component. Reliabilities are computed from the results of a simulation process involving an assumed variability (standard deviation) of the load and strength in determining fatigue life. The uncertainties in the high-reliability computation are then examined by introducing small changes in the variability of the given load and strength values in the study. Results showed that, for a given component lifetime, a small increase in the variability of load or strength produced large differences in the component reliability estimates. Among the factors involved in computing fatigue lifetimes, the component reliability estimates were found to be most sensitive to variability in loading. Component fatigue life probability density functions were obtained from the simulation process for various levels of variability. The range of life estimates was very large for relatively small variability in load and strength.
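
    A hedged numerical illustration of the sensitivity effect reported above: a Monte Carlo load/strength model in which small changes in the coefficient of variation visibly shift the estimated reliability. The distributions and means are invented, not taken from the helicopter case study.

      import numpy as np

      rng = np.random.default_rng(3)

      def reliability(load_cv, strength_cv, n=200_000):
          """Monte Carlo estimate of P(strength > load) for normally
          distributed load and strength (a stand-in for a fatigue-life
          criterion at a fixed lifetime); cv = std / mean."""
          load = rng.normal(1.0, 1.0 * load_cv, n)
          strength = rng.normal(1.3, 1.3 * strength_cv, n)
          return np.mean(strength > load)

      print(f"baseline cv 0.10/0.10:    R = {reliability(0.10, 0.10):.4f}")
      print(f"load cv 0.10 -> 0.15:     R = {reliability(0.15, 0.10):.4f}")
      print(f"strength cv 0.10 -> 0.15: R = {reliability(0.10, 0.15):.4f}")
      # A small change in either variability shifts the estimated failure
      # probability by roughly a factor of two in this toy setup.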

  11. Viscosity of carbon nanotube suspension using artificial neural networks with principal component analysis

    NASA Astrophysics Data System (ADS)

    Yousefi, Fakhri; Karimi, Hajir; Mohammadiyan, Somayeh

    2016-11-01

    This paper applies a model combining a back-propagation network (BPN) and principal component analysis (PCA) to estimate the effective viscosity of carbon nanotube suspensions. The effective viscosities of multiwall carbon nanotube suspensions are modeled as a function of temperature, nanoparticle volume fraction, effective nanoparticle length and the viscosity of the base fluid using an artificial neural network. The results obtained with the BPN-PCA model show good agreement with the experimental data.
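
    A minimal sketch of the BPN-PCA idea using scikit-learn (a multi-layer perceptron standing in for the back-propagation network); the synthetic viscosity data, the functional form of the target, and the PCA dimensionality are assumptions for the example.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(4)

      # Synthetic stand-in data: temperature (K), volume fraction, nanotube
      # length (um) and base-fluid viscosity (mPa s) -> effective viscosity.
      X = rng.uniform([280.0, 0.001, 0.5, 0.5],
                      [340.0, 0.010, 10.0, 1.5], size=(300, 4))
      y = X[:, 3] * (1 + 50 * X[:, 1]) * (1 + 0.02 * X[:, 2]) * (320.0 / X[:, 0])

      # Scale, decorrelate/compress with PCA, then fit the neural network.
      model = make_pipeline(
          StandardScaler(),
          PCA(n_components=3),
          MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                       max_iter=5000, random_state=0),
      )
      model.fit(X[:250], y[:250])
      print("held-out R^2:", round(model.score(X[250:], y[250:]), 3))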

  12. Homogenization of soil properties map by Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Valverde Arias, Omar; Garrido, Alberto; Villeta, Maria; Tarquis, Ana Maria

    2016-04-01

    It is widely known that extreme climatic phenomena are occurring with greater intensity and frequency. This puts increasing pressure on farming and makes it important for governments and institutions to implement agricultural risk management policies. One of the main strategies is to transfer risk through agricultural insurance. Index-based agricultural insurance has gained importance in the last decade; it compares measured index values with a defined threshold that triggers payment for damage losses. However, index-based insurance cannot rely on an isolated measurement. It must be integrated into a complete monitoring system that draws on many sources of information and tools, for example index influence areas, crop production risk maps, crop yields and claim statistics. Establishing index influence areas requires secondary information delineating homogeneous climatic and soil zones, within which index measurements on the crops of interest will be similar, thereby reducing basis risk. An efficient method is needed to delimit such homogeneous areas without depending solely on expert criteria and in a way that can be widely applied. For this reason, this study assesses two conventional agricultural and geographic methods based on expert criteria (control and climatic maps) and one classical statistical method of multi-factorial analysis (factorial map), all aimed at homogenizing soil and climatic characteristics. The resulting maps were validated by agricultural and spatial analysis; the statistical method (factorial map) gave very good results, proving to be an efficient and accurate approach that could be used for similar purposes.

  13. A thermodynamically consistent explicit competitive adsorption isotherm model based on second-order single component behaviour.

    PubMed

    Ilić, Milica; Flockerzi, Dietrich; Seidel-Morgenstern, Andreas

    2010-04-01

    A competitive adsorption isotherm model is derived for binary mixtures of components characterized by single component isotherms which are second-order truncations of higher order equilibrium models suggested by multi-layer theory and statistical thermodynamics. The competitive isotherms are determined using the ideal adsorbed solution (IAS) theory, which, in the case of complex single component isotherms, does not generate explicit expressions to calculate equilibrium loadings and causes time-consuming iterations in simulations of adsorption processes. The explicit model derived in this work is based on an analysis of the roots of a cubic polynomial resulting from the set of IAS equations. The suggested thermodynamically consistent and widely applicable competitive isotherm model can be recommended as a flexible tool for efficient simulations of fixed-bed adsorber dynamics.

  14. A robust independent component analysis (ICA) model for functional magnetic resonance imaging (fMRI) data

    NASA Astrophysics Data System (ADS)

    Ao, Jingqi; Mitra, Sunanda; Liu, Zheng; Nutter, Brian

    2011-03-01

    The coupling of carefully designed experiments with proper analysis of functional magnetic resonance imaging (fMRI) data provides us with a powerful as well as noninvasive tool to help us understand cognitive processes associated with specific brain regions, and hence could be used to detect abnormalities induced by a diseased state. The hypothesis-driven General Linear Model (GLM) and the data-driven Independent Component Analysis (ICA) model are the two most commonly used models for fMRI data analysis. A hybrid ICA-GLM model combines the two to take advantage of the benefits of both models and achieve more accurate mapping of the stimulus-induced activated brain regions. We propose a modified hybrid ICA-GLM model with probabilistic ICA that includes a noise model. In this modified hybrid model, a probabilistic principal component analysis (PPCA)-based component number estimation is used in the ICA stage to extract the intrinsic number of original time courses. In addition, frequency matching is introduced into the time course selection stage, along with temporal correlation, F-test based model fitting estimation, and time course combination, to produce a more accurate design matrix for GLM. A standard fMRI dataset is used to compare the results of applying GLM and the proposed hybrid ICA-GLM in generating activation maps.

  15. Fatigue detection in strength training using three-dimensional accelerometry and principal component analysis.

    PubMed

    Brown, Niklas; Bichler, Sebastian; Fiedler, Meike; Alt, Wilfried

    2016-06-01

    Detection of neuro-muscular fatigue in strength training is difficult, due to missing criterion measures and the complexity of fatigue. Thus, a variety of methods are used to determine fatigue. The aim of this study was to use a principal component analysis (PCA) on a multifactorial data-set based on kinematic measurements to determine fatigue. Twenty participants (experienced in strength training, 60% male) executed 3 sets of 3 exercises at 50% (12 repetitions), 75% (12 repetitions) and 100% of the 12-repetition maximum (12RM). Data were collected with a 3D accelerometer and analysed by a newly developed algorithm to evaluate parameters for each repetition. A PCA with six variables was carried out on the results. A fatigue factor was computed based on the loadings on the first component. One-way ANOVA with Bonferroni post hoc analysis was calculated to test for differences between the intensity levels. All six input variables had high loadings on the first component. The ANOVA showed a significant difference between intensities (p < 0.001). Post hoc analysis revealed a difference between 100% and the lower intensities (p < 0.05) and no difference between 50% and 75% of 12RM. Based on these results, it is possible to distinguish between fatigued and non-fatigued sets of strength training. PMID:27111008
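
    The fatigue-factor construction can be sketched as follows, assuming synthetic per-repetition kinematic features; the feature set and the fatigue trend are invented, not the study's data.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic per-repetition kinematic features for one set (rows =
      # repetitions; columns = six invented features, e.g. mean velocity,
      # peak acceleration, duration, smoothness, range, variability).
      n_reps = 12
      trend = np.linspace(0.0, 1.0, n_reps)[:, None]   # latent fatigue
      features = 1.0 - 0.4 * trend + 0.05 * rng.normal(size=(n_reps, 6))

      # The first principal component summarises the common trend; its
      # per-repetition score serves as a single "fatigue factor".
      centered = features - features.mean(axis=0)
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      loadings = Vt[0]                       # all six features load on PC1
      fatigue_factor = centered @ loadings
      print("PC1 loadings:", loadings.round(2))
      print("fatigue factor per repetition:", fatigue_factor.round(2))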

  16. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The purpose of this study is to assess the contribution of the three major gasoline blending components to the potential environmental impacts (PEI), which are the reformate, alkylate and cracked gasoline. This study accounts for losses of the gasoline blending components due to...

  18. Stability analysis for n-component Bose-Einstein condensate

    SciTech Connect

    Roberts, David C.; Ueda, Masahito

    2006-05-15

    We derive the dynamic and thermodynamic stability conditions for dilute multicomponent Bose-Einstein condensates (BECs). These stability conditions, generalized for n-component BECs, are found to be equivalent and are shown to be consistent with the phase diagrams of two- and three-component condensates that are derived from energetic arguments.

  19. A Component Analysis of Positive Behaviour Support Plans

    ERIC Educational Resources Information Center

    McClean, Brian; Grey, Ian

    2012-01-01

    Background: Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Method: Sixty-one staff working with individuals with intellectual disability…

  20. A Component Analysis of Cognitive-Behavioral Treatment for Depression.

    ERIC Educational Resources Information Center

    Jacobson, Neil S.; And Others

    1996-01-01

    Tested Beck's theory explaining efficacy of cognitive- behavioral therapy (CT) for depression. Involved randomly assigning 150 outpatients with major depression to a treatment focused on the behavioral activation (BA) component of CT, a treatment including BA and teaching skills to modify automatic thoughts, but excluding the components of CT…

  1. Transcriptome analysis of all two-component regulatory system mutants of Escherichia coli K-12.

    PubMed

    Oshima, Taku; Aiba, Hirofumi; Masuda, Yasushi; Kanaya, Shigehiko; Sugiura, Masahito; Wanner, Barry L; Mori, Hirotada; Mizuno, Takeshi

    2002-10-01

    We have systematically examined the mRNA profiles of 36 two-component deletion mutants, which include all two-component regulatory systems of Escherichia coli, under a single growth condition. DNA microarray results revealed that the mutants belong to one of three groups based on their gene expression profiles in Luria-Bertani broth under aerobic conditions: (i) those with no or little change; (ii) those with significant changes; and (iii) those with drastic changes. Under these conditions, the anaeroresponsive ArcB/ArcA system, the osmoresponsive EnvZ/OmpR system and the response regulator UvrY showed the most drastic changes. Cellular functions such as flagellar synthesis and expression of the RpoS regulon were affected by multiple two-component systems. A high correlation coefficient of expression profile was found between several two-component mutants. Together, these results support the view that a network of functional interactions, such as cross-regulation, exists between different two-component systems. The compiled data are available at our website (http://ecoli.aist-nara.ac.jp/xp_analysis/2_components). PMID:12366850

  2. 78 FR 13895 - Certain Wireless Communications Base Stations and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ... COMMISSION Certain Wireless Communications Base Stations and Components Thereof; Institution of Investigation... wireless communications base stations and components thereof by reason of infringement of U.S. Patent No. 6... communications base stations and components thereof by reason of infringement of one or more of claims 1, 2, 4,...

  3. Variance component estimation for mixed model analysis of cDNA microarray data.

    PubMed

    Sarholz, Barbara; Piepho, Hans-Peter

    2008-12-01

    Microarrays provide a valuable tool for the quantification of gene expression. Usually, however, there is a limited number of replicates, leading to unsatisfactory variance estimates in a gene-wise mixed model analysis. As thousands of genes are available, it is desirable to combine information across genes. When more than two tissue types or treatments are to be compared it might be advisable to consider the array effect as random. Then information between arrays may be recovered, which can increase accuracy in estimation. We propose a method of variance component estimation across genes for a linear mixed model with two random effects. The method may be extended to models with more than two random effects. We assume that the variance components follow a log-normal distribution. Assuming that the sums of squares from the gene-wise analysis, given the true variance components, follow a scaled chi-squared distribution, we adopt an empirical Bayes approach. The variance components are estimated by the expectation of their posterior distribution. The new method is evaluated in a simulation study. Differentially expressed genes are more likely to be detected by tests based on these variance estimates than by tests based on gene-wise variance estimates. This effect is most visible in studies with small array numbers. Analyzing a real data set on maize endosperm, the method is shown to work well. PMID:19035549

  4. Application of the component paradigm for analysis and design of advanced health system architectures.

    PubMed

    Blobel, B

    2000-12-01

    Based on the component paradigm for software engineering as well as on a consideration of common middleware approaches for health information systems, a generic component model has been developed supporting analysis, design, implementation and harmonisation of such complex systems. Using methods like abstract automata and the Unified Modelling Language (UML), it could be shown that such components enable the modelling of real-world systems at different levels of abstraction and granularity, thus reflecting different views on the same system in a generic and consistent way. Therefore, not only programs and technologies could be modelled, but also business processes, organisational frameworks or security issues, as done successfully within the framework of several European projects. PMID:11137472

  5. Inverse spatial principal component analysis for geophysical survey data interpolation

    NASA Astrophysics Data System (ADS)

    Li, Qingmou; Dehler, Sonya A.

    2015-04-01

    The starting point for data processing, visualization, and overlay with other data sources in geological applications often involves building a regular grid by interpolation of geophysical measurements. Typically, the sampling interval along survey lines is much higher than the spacing between survey lines because the geophysical recording system is able to operate with a high sampling rate, while the costs and slower speeds associated with operational platforms limit line spacing. However, currently available interpolating methods often smooth data observed with a higher sampling rate along a survey line to accommodate the lower spacing across lines, and much of the higher resolution information is not captured in the interpolation process. Here, a method termed inverse spatial principal component analysis (isPCA) is developed to address this problem. In the isPCA method, a whole profile observation as well as its line position is handled as an entity, and a survey collection of line entities is analyzed for interpolation. To test its performance, the developed isPCA method is used to process a simulated airborne magnetic survey from an existing magnetic grid offshore the Atlantic coast of Canada. The interpolation results using the isPCA method and other methods are compared with the original survey grid. It is demonstrated that the isPCA method outperforms the Inverse Distance Weighting (IDW), Kriging (Geostatistical), and MINimum Curvature (MINC) interpolation methods in retaining detailed anomaly structures and restoring original values. In a second test, a high resolution magnetic survey offshore Cape Breton, Nova Scotia, Canada, was processed and the results are compared with other geological information. This example demonstrates the effective performance of the isPCA method in basin structure identification.

  6. Electromagnetic crystal based terahertz thermal radiators and components

    NASA Astrophysics Data System (ADS)

    Wu, Ziran

    prototyping approach. Third, an all-dielectric THz waveguide is designed, fabricated and characterized. The design is based on hollow-core EMXT waveguide, and the fabrication is implemented with the THz prototyping method. Characterization results of the waveguide power loss factor show good consistency with the simulation, and waveguide propagation loss as low as 0.03 dB/mm at 105 GHz is demonstrated. Several design parameters are also varied and their impacts on the waveguide performance investigated theoretically. Finally, a THz EMXT antenna based on expanding the defect radius of the EMXT waveguide to a horn shape is proposed and studied. The boresight directivity and main beam angular width of the optimized EMXT horn antenna are comparable with a copper horn antenna of the same dimensions at low frequencies, and much better than the copper horn at high frequencies. The EMXT antenna has been successfully fabricated via the same THz prototyping, and we believe this is the first time an EMXT antenna of this architecture has been fabricated. Far-field measurement of the EMXT antenna radiation pattern is under way. Also, in order to integrate planar THz solid-state devices (especially source and detector) and THz samples under test with the potential THz micro-system that can be fabricated by the prototyping approach, an EMXT waveguide-to-microstrip line transition structure is designed. The structure uses tapered solid dielectric waveguides on both ends to transit THz energy from the EMXT waveguide defect onto the microstrip line. Simulation of the transition structure in a back-to-back configuration yields about -15 dB insertion loss mainly due to the dielectric material loss. The coupling and radiation loss of the transition structure is estimated to be -2.115 dB. The fabrication and characterization of the transition system is currently underway. With all the above THz components realized in the future, integrated THz micro-systems manufactured by the same prototyping technique will be

  7. Spatiotemporal analysis of GPS time series in vertical direction using independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Dai, Wujiao; Peng, Wei; Meng, Xiaolin

    2015-11-01

    GPS has been widely used in the field of geodesy and geodynamics thanks to the development of the technology and the improvement of its positioning accuracy. A time series observed by GPS in the vertical direction usually contains tectonic signals, non-tectonic signals, residual atmospheric delay, measurement noise, etc. Analyzing this information is the basis of crustal deformation research. Furthermore, analyzing the GPS time series and extracting the non-tectonic information are helpful for studying the effect of various geophysical events. Principal component analysis (PCA) is an effective tool for spatiotemporal filtering and GPS time series analysis. However, because it is unable to extract statistically independent components, PCA is ill suited to recovering the implicit information in time series. Independent component analysis (ICA) is a statistical method of blind source separation (BSS) and can separate original signals from mixed observations. In this paper, ICA is used as a spatiotemporal filtering method to analyze the spatial and temporal features of vertical GPS coordinate time series in the UK and the Sichuan-Yunnan region in China. Meanwhile, the contributions from atmospheric and soil moisture mass loading are evaluated. The analysis of the relevance between the independent components and mass loading, together with their spatial distribution, shows that the signals extracted by ICA have a strong correlation with the non-tectonic deformation, indicating that ICA performs well in spatiotemporal analysis.

  8. Data-Parallel Mesh Connected Components Labeling and Analysis

    SciTech Connect

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
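
    A serial sketch of the core union-find labeling idea (the paper's contribution is the multi-stage, distributed-memory version, which is not reproduced here):

      # Serial union-find sketch; cell indices and adjacency pairs are invented.
      def find(parent, x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]   # path compression
              x = parent[x]
          return x

      def union(parent, a, b):
          ra, rb = find(parent, a), find(parent, b)
          if ra != rb:
              parent[rb] = ra

      def label_components(n_cells, shared_faces):
          """shared_faces: iterable of (cell_i, cell_j) adjacency pairs."""
          parent = list(range(n_cells))
          for a, b in shared_faces:
              union(parent, a, b)
          return [find(parent, c) for c in range(n_cells)]

      # Two sub-meshes: cells {0, 1, 2} are connected, cells {3, 4} are connected.
      print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # -> [0, 0, 0, 3, 3]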

  9. Estimation and Psychometric Analysis of Component Profile Scores via Multivariate Generalizability Theory

    ERIC Educational Resources Information Center

    Grochowalski, Joseph H.

    2015-01-01

    Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…

  10. A component analysis of schedule thinning during functional communication training.

    PubMed

    Betz, Alison M; Fisher, Wayne W; Roane, Henry S; Mintz, Joslyn C; Owen, Todd M

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can effectively address this limitation. However, the extent to which each of these components contributes to the effectiveness of the overall approach remains uncertain. In the current investigation, we evaluated the first component by comparing rates of the FCR and problem behavior under mixed and multiple schedules and evaluated the second component by rapidly switching from dense mixed and multiple schedules to lean multiple schedules without gradually thinning the density of reinforcement. Results indicated that multiple schedules decreased the overall rate of reinforcement for the FCR and maintained the strength of the FCR and low rates of problem behavior without gradually thinning the reinforcement schedule.

  11. Joint Procrustes Analysis for Simultaneous Nonsingular Transformation of Component Score and Loading Matrices

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2009-01-01

    In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…

  12. 77 FR 69509 - Combining Modal Responses and Spatial Components in Seismic Response Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-19

    ... COMMISSION Combining Modal Responses and Spatial Components in Seismic Response Analysis AGENCY: Nuclear... Components in Seismic Response Analysis'' as an administratively changed guide in which there are minor... the NRC staff considers acceptable for combining modal responses and spatial components in...

  13. Nonlinear fitness space structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    NASA Astrophysics Data System (ADS)

    Tiilikainen, J.; Tilli, J.-M.; Bosund, V.; Mattila, M.; Hakkarainen, T.; Airaksinen, V.-M.; Lipsanen, H.

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.
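
    A toy sketch loosely inspired by the covariance-based coordinate rotation described above: a crude evolutionary loop that mutates along the eigenvectors of the parent population's covariance. It is not the authors' algorithm; the test function and all parameters are invented.

      import numpy as np

      rng = np.random.default_rng(7)

      def rosenbrock(x):
          return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

      # Crude evolutionary loop: mutate along the eigenvectors of the parent
      # population's covariance, so coupled parameters are varied together.
      pop = rng.normal(0.0, 2.0, size=(40, 2))
      for gen in range(80):
          fitness = np.array([rosenbrock(p) for p in pop])
          parents = pop[np.argsort(fitness)[:20]]          # truncation selection
          eigvals, eigvecs = np.linalg.eigh(np.cov(parents.T))
          steps = rng.normal(size=(40, 2)) * np.sqrt(np.maximum(eigvals, 1e-6))
          pop = parents[rng.integers(0, 20, 40)] + steps @ eigvecs.T
      best = min(pop, key=rosenbrock)
      print("best point:", best.round(3), " f =", round(rosenbrock(best), 5))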

  14. Analysis of elastic micro optical components under large deformation

    NASA Astrophysics Data System (ADS)

    Hoshino, Kazunori; Shimoyama, Isao

    2003-01-01

    We describe a technique for analyzing the mechanical and optical properties of deformable optical elements that combines the finite element method, ray-tracing and birefringence measurement. We fabricated a pneumatically actuated microlens array on an elastic polydimethylsiloxane (PDMS) film to assess the proposed analysis technique. The lenses are 120 mum in diameter and arranged on the top surface of a 200 mum thick base film. The lenses are displaced by pneumatic actuators at the bottom of the film. The measured mechanical-optical properties of the PDMS test materials showed a good match with the calculation. The paths and retardation of light beams transmitted in the microlens array under several actuating conditions were then analyzed. The lens displacement of 21.8 mum was measured at an applied pressure of -45 kPa. At the same pressure, a ray-trace analysis showed that the actuator changed the visual axis of each lens by 5°, while the retardation was estimated to be within the order of 5 × 10-3 nm.

  15. [Tensor Feature Extraction Using Multi-linear Principal Component Analysis for Brain Computer Interface].

    PubMed

    Wang, Jinjia; Yang, Liang

    2015-06-01

    The brain computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multi-linear principal component analysis (MPCA) framework was used to address the limitations of traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) in processing multichannel EEG signals in tensor form. Based on MPCA, we used tensor-to-matrix projection to achieve dimensionality reduction and feature extraction. We then used the Fisher linear classifier to classify the features. Furthermore, we applied this novel method to BCI Competition II dataset 4 and BCI Competition IV dataset 3 in the experiment. The second-order tensor representation of time-space EEG data and the third-order tensor representation of time-space-frequency EEG data were used. The best results, superior to those from other dimensionality reduction methods, were obtained by careful tuning of the parameters P and Q. For the second-order tensor, the highest accuracy rates achieved were 81.0% and 40.1%, and for the third-order tensor, the highest accuracy rates were 76.0% and 43.5%, respectively.

  16. Independent component analysis of noninvasively recorded cortical magnetic DC-fields in humans.

    PubMed

    Wübbeler, G; Ziehe, A; Mackert, B M; Müller, K R; Trahms, L; Curio, G

    2000-05-01

    We apply a recently developed multivariate statistical data analysis technique--so-called blind source separation (BSS) by independent component analysis--to process magnetoencephalogram recordings of near-dc fields. The extraction of near-dc fields from MEG recordings has great relevance for medical applications, since slowly varying dc-phenomena have been found, e.g., in cerebral anoxia and spreading depression in animals. Comparing several BSS approaches, it turns out that an algorithm based on temporal decorrelation successfully extracted a dc-component which was induced in the auditory cortex by the presentation of music. The task is challenging because of the limited amount of available data and the corruption by outliers, which makes it an interesting real-world testbed for studying the robustness of ICA methods.
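
    A compact sketch of separation by temporal decorrelation (an AMUSE-style two-matrix step, simpler than the algorithms compared in the paper), on synthetic sources with different autocorrelations; the mixing matrix and signals are invented.

      import numpy as np

      rng = np.random.default_rng(10)

      # Two sources with different autocorrelations, linearly mixed into two
      # channels: the structure temporal-decorrelation methods exploit.
      n = 5000
      t = np.arange(n)
      s1 = np.sin(2 * np.pi * t / 100.0)         # slow, dc-like oscillation
      s2 = rng.normal(size=n)
      for i in range(1, n):                      # AR(1) noise source
          s2[i] += 0.3 * s2[i - 1]
      S = np.vstack([s1, s2])
      A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
      X = A @ S

      # AMUSE-style step: whiten, then diagonalize one time-lagged covariance.
      X = X - X.mean(axis=1, keepdims=True)
      d, E = np.linalg.eigh(X @ X.T / n)
      W = E @ np.diag(d ** -0.5) @ E.T           # whitening matrix
      Z = W @ X
      tau = 5
      Ct = Z[:, tau:] @ Z[:, :-tau].T / (n - tau)
      _, V = np.linalg.eigh((Ct + Ct.T) / 2.0)   # symmetrized lagged covariance
      recovered = V.T @ Z

      print("abs(correlation) of sources vs recovered:")
      print(np.abs(np.corrcoef(np.vstack([S, recovered]))[:2, 2:]).round(2))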

  17. Analysis of Dynamic Interactions between Different Drivetrain Components with a Detailed Wind Turbine Model

    NASA Astrophysics Data System (ADS)

    Bartschat, A.; Morisse, M.; Mertens, A.; Wenske, J.

    2016-09-01

    The presented work describes a detailed analysis of the dynamic interactions among mechanical and electrical drivetrain components of a modern wind turbine under the influence of parameter variations, different control mechanisms and transient excitations. For this study, a detailed model of a 2 MW wind turbine with a gearbox, a permanent magnet synchronous generator and a full power converter has been developed which considers all relevant characteristics of the mechanical and electrical subsystems. This model includes an accurate representation of the aerodynamics and the mechanical properties of the rotor and the complete mechanical drivetrain. Furthermore, a detailed electrical modelling of the generator, the full scale power converter with discrete switching devices, its filters, the transformer and the grid as well as the control structure is considered. The analysis shows that, considering control measures based on active torsional damping, interactions between mechanical and electrical subsystems can significantly affect the loads and thus the individual lifetime of the components.

  18. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed failure prediction for analog circuits, and the few existing methods do not tie feature extraction and calculation to circuit analysis, so the fault indicator (FI) calculation often lacks a sound rationale, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since single-component faults account for the largest share of analog circuit failures, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degradation of single components in the model, in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degradation trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Because the FI feature set is calculated more rationally, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  19. Overview of independent component analysis technique with an application to synthetic aperture radar (SAR) imagery processing.

    PubMed

    Fiori, Simone

    2003-01-01

    We present an overview of independent component analysis, an emerging signal processing technique based on neural networks, with the aim to provide an up-to-date survey of the theoretical streams in this discipline and of the current applications in the engineering area. We also focus on a particular application, dealing with a remote sensing technique based on synthetic aperture radar imagery processing: we briefly review the features and main applications of synthetic aperture radar and show how blind signal processing by neural networks may be advantageously employed to enhance the quality of remote sensing data.

  20. Component Architectures and Web-Based Learning Environments

    ERIC Educational Resources Information Center

    Ferdig, Richard E.; Mishra, Punya; Zhao, Yong

    2004-01-01

    The Web has caught the attention of many educators as an efficient communication medium and content delivery system. But we feel there is another aspect of the Web that has not been given the attention it deserves. We call this aspect of the Web its "component architecture." Briefly it means that on the Web one can develop very complex…

  1. Technological Alternatives to Paper-Based Components of Team-Based Learning

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Walker, Joshua D.

    2008-01-01

    The authors have been using components of team-based learning (TBL) in two undergraduate courses at the University of Texas for several years: an educational psychology survey course--Cognition, Human Learning and Motivation--and Introduction to Statistics. In this chapter, they describe how they used technology in classes of fifty to seventy…

  2. Optimized Principal Component Analysis on Coronagraphic Images of the Fomalhaut System

    NASA Astrophysics Data System (ADS)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.; Amara, Adam

    2014-01-01

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model dependent upper mass limit of 13-18 M Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile under program number 087.C-0901(B).
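
    A self-contained sketch of PCA-based PSF subtraction on synthetic data (not the authors' pipeline; real reductions add frame alignment, masking and contrast calibration, and the data here are invented):

      import numpy as np

      rng = np.random.default_rng(8)

      # Synthetic cube: 50 reference frames of stellar speckles (5 hidden
      # patterns) plus one science frame containing a faint "planet" pixel.
      n_px = 32 * 32
      modes = rng.normal(size=(5, n_px))
      frames = rng.normal(size=(50, 5)) @ modes + 0.1 * rng.normal(size=(50, n_px))
      planet = np.zeros(n_px)
      planet[500] = 3.0
      science = rng.normal(size=5) @ modes + planet + 0.1 * rng.normal(size=n_px)

      # Model the stellar PSF with the first k principal components of the
      # reference frames and subtract its projection from the science frame.
      k = 5
      ref_mean = frames.mean(axis=0)
      U, s, Vt = np.linalg.svd(frames - ref_mean, full_matrices=False)
      Z = Vt[:k]                                # top-k PCA modes
      x = science - ref_mean
      residual = x - (x @ Z.T) @ Z              # planet flux shines through

      print("planet pixel:", residual[500].round(2),
            "| max |residual| elsewhere:",
            np.abs(np.delete(residual, 500)).max().round(2))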

  3. Pheromone component patterns of moth evolution revealed by computer analysis of the Pherolist.

    PubMed

    Byers, John A

    2006-03-01

    1. The Pherolist internet site listing moth sex pheromone components reported in the literature was downloaded and processed by a basic program into a database with 2931 combinations of 377 unique chemical names of sex pheromone attractants used by 1572 moth species in 619 genera and 49 families. Names of pheromone compounds were analysed for aliphatic chain length, unsaturation position, geometric configuration, functional group (aldehyde, alcohol, acetate, epoxide, methyl-branched and hydrocarbon) and number of instances such combinations are used by species and families. 2. The analyses revealed pheromone blends of species ranged from one to eight components (45% species with one component, 36% two, 12% three, 5% four, 1% five, < or = 0.5% for > or = six). The numbers of different components of various chain lengths and functional groups, the numbers of instances such compounds are used by species and the numbers of species using such compounds are presented. 3. The average number of pheromone components per species increased as the number of species in a family increased based on linear regression of components in the 10 largest families, with species numbers ranging from 19 to 461. Pooling the four largest families gave a mean of 1.96 components per species that was significantly greater than the mean of the next 14 smaller families (1.63). Because related species in a large family would need more communication channels, this suggests that these species on average evolved to produce and detect more components in their pheromone blends to achieve a unique communication channel than was needed by species in smaller families. 4. Speciation in moths would entail evolutionary changes in both pheromone biosynthetic and sensory systems that avoided competition for communication channels of existing species. Regression analysis indicated that the more species in a family the more unique pheromone components, but the increase diminishes progressively. This suggests

  4. [Assessment of landscape ecological security and optimization of landscape pattern based on spatial principal component analysis and resistance model in arid inland area: A case study of Ganzhou District, Zhangye City, Northwest China].

    PubMed

    Pan, Jing-hu; Liu, Xiao

    2015-10-01

    Starting from the ecological environment of an inland river in an arid area, the distribution of the ecological security pattern of Ganzhou District was obtained by using the theory of landscape ecology, spatial principal component analysis (SPCA) and GIS techniques. Ten factors, such as altitude, slope, soil erosion, vegetation coverage, and distance from roads, were selected as the constraint conditions. According to the minimum cumulative resistance (MCR) model of the landscape, ecological corridors and nodes were established to optimize the structure and function of the ecological network. The results showed that the comprehensive ecological security of the research area was average overall. The area at a moderate level of security, 1318.7 km2, was the largest, accounting for 36.7% of the research area. The area at a low level of security was mainly located in the northern part and accounted for 19.9% of the study area. Interlacing points, lines and surfaces, a regional ecological network was constructed, consisting of six ecological corridors, 14 ecological nodes, one large ecological source region and several small source regions; this network could effectively improve the ecological security level of the study area. PMID:26995922

  6. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy.

    PubMed

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-30

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features. PMID:27071414

  7. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.

  8. Analysis of global components in Ganoderma using liquid chromatography system with multiple columns and detectors.

    PubMed

    Qian, Zhengming; Zhao, Jing; Li, Deqiang; Hu, Dejun; Li, Shaoping

    2012-10-01

    In the present study, a liquid chromatography system with multiple columns and detectors was developed for the analysis of global components in traditional Chinese medicines. The system consisted of three columns (a size exclusion column, a hydrophilic interaction column and a reversed-phase column) and three detectors (a diode array detector, an evaporative light scattering detector and a mass spectrometry detector), based on a column-switching technique. The developed system was successfully applied to the analysis of global components in Ganoderma, a well-known Chinese medicinal mushroom, covering macromolecules (polysaccharides) as well as high-polarity (nucleosides and sugars) and low-polarity (triterpenes) small molecules. As a result, one macromolecular chromatographic peak was found in two Ganoderma species, 19 components were identified in Ganoderma lucidum (two sugars, three nucleosides, and 14 triterpenes), and four components (two sugars and two nucleosides) were identified in Ganoderma sinense. The developed system is helpful for understanding the comprehensive chemical character of TCMs.

  9. A Content Analysis of Preconception Health Education Materials: Characteristics, Strategies, and Clinical-Behavioral Components

    PubMed Central

    Levis, Denise M.; Westbrook, Kyresa

    2015-01-01

    Purpose: Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. Design: We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Setting: Not applicable. Participants: Not applicable. Method: Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of materials’ characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). Results: The self-assessment of PCH behaviors was the most common material (28%) to appear in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). “Practicing PCH benefits the baby’s health” was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Conclusion: Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed around PCH components, in terms of packaging and increasing motivation, which could guide use and placement of clinical-behavioral components within promotional materials. PMID:23286661

  10. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  11. Differentially Variable Component Analysis (dVCA): Identifying Multiple Evoked Components using Trial-to-Trial Variability

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.

    2003-01-01

    Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the Differentially Variable Component Analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1, in an awake, behaving macaque monkey.

  12. ANALYSIS OF KEY MPC COMPONENTS MATERIAL REQUIREMENTS (SCPB: N/A)

    SciTech Connect

    D. Stahl

    1996-03-19

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) in response to a request received via a QAP-3-12 Design Input Data Request from Waste Acceptance, Storage & Transportation (WAST) Design (formerly MRS/MPC Design). The request is to provide specific material requirements for the various MPC components (shell, basket, closure lids, shield plug, neutron absorber, and flux traps, if used). The objective of this analysis is to provide the requested requirements; its purpose is to provide a documented record of the basis for those requirements. The response is stated in Section 8 herein. The analysis is based upon requirements from an MGDS perspective.

  13. Component analysis of productivity in home care RNs.

    PubMed

    Benefield, L E

    1996-08-01

    The purpose of this study was to develop a productivity measurement method applicable to home health care registered nurses (RNs) by identifying and quantifying the areas of knowledge and ability that define productive nursing practice in home health care. A descriptive, correlational design using qualitative and quantitative methods of data collection and analysis identified 35 knowledge and ability variables that grouped into seven dimensions: Client/Family Management, Practice Management, Knowledge/Skill Maintenance, Communication, Nursing Process, Written Documentation, and Home Health Care Knowledge. There were no significant differences in productivity variables among four major types of agencies. Among agencies considered preeminent, intellectual skills appeared to be of greater importance to productive practice than direct-care skills. The seven productivity dimensions that emerged from this study show promise in providing 1) a theoretical basis for understanding the knowledge and abilities associated with RN productivity in the home health setting, 2) a description of nurse inputs in a home health services productivity model, and 3) a reality-based measurement tool that has utility in managing RN productivity in home health care. PMID:8828384

  14. Partial coverage inspection of corroded engineering components using extreme value analysis

    NASA Astrophysics Data System (ADS)

    Benstock, Daniel; Cegla, Frederic

    2016-02-01

    Ultrasonic thickness C-scans provide information about the wall thickness of a component over the entire inspected area. They are performed to determine the condition of a component. However, full-coverage scanning is time consuming, expensive, and can be infeasible where access to a component is restricted. The pressure to maximize inspection resources and minimize inspection costs has led to the development of both new sensing technologies and new inspection strategies. Partial coverage inspection aims to tackle this challenge by using data from an ultrasonic thickness C-scan of a small fraction of a component's area to extrapolate to the condition of the entire component. Extreme value analysis is a particular tool used in partial coverage inspection. Typical implementations of extreme value analysis partition a thickness map into a number of equally sized blocks and extract the minimum thickness from each block. Extreme value theory provides a limiting form for the probability distribution of this set of minimum thicknesses, from which the parameters of the limiting distribution can be estimated. This distribution provides a statistical model for the minimum thickness in a given area, which can be used for extrapolation. In this paper, the basics of extreme value analysis and its assumptions are introduced. We discuss a new method for partitioning a thickness map, based on ensuring that there is evidence that the assumptions of extreme value theory are met by the inspection data. Examples of the implementation of this method are presented on both simulated and experimental data. Further, it is shown that realistic predictions can be made from the statistical models developed using this methodology.
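
    To make the block-minima procedure concrete, here is a minimal sketch under stated assumptions: the thickness map, block size, and threshold are synthetic stand-ins, and the fit uses the standard trick of applying a generalized extreme value (GEV) fit for maxima to the negated minima.

    ```python
    # Block-minima extreme value analysis of a wall-thickness map (illustrative).
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    thickness = 10.0 - rng.gamma(2.0, 0.1, size=(512, 512))  # mm, synthetic C-scan

    # Partition the map into equally sized blocks and take each block's minimum.
    b = 64
    blocks = thickness.reshape(512 // b, b, 512 // b, b).swapaxes(1, 2)
    minima = blocks.min(axis=(2, 3)).ravel()

    # The minimum of X is the maximum of -X, so fit a GEV to the negated minima.
    c, loc, scale = genextreme.fit(-minima)

    # Probability that the thinnest point in a block-sized area is below 9.5 mm.
    p_below = genextreme.sf(-9.5, c, loc=loc, scale=scale)
    print(f"P(min thickness < 9.5 mm per block) = {p_below:.3g}")
    ```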

  15. Condition Based Monitoring of Gas Turbine Combustion Components

    SciTech Connect

    Ulerich, Nancy; Kidane, Getnet; Spiegelberg, Christine; Tevs, Nikolai

    2012-09-30

    The objective of this program is to develop sensors that allow condition-based monitoring of critical combustion parts of gas turbines. Siemens teamed with innovative small companies that were developing sensor concepts to monitor wear and cracking of hot turbine parts. A magnetic crack-monitoring sensor concept developed by JENTEK Sensors, Inc. was evaluated in laboratory tests, and designs for engine application were evaluated. The inability to develop a robust lead wire to transmit the signal over long distances resulted in discontinuation of this concept. An optical wear sensor concept proposed by K Sciences GP, LLC was tested in proof-of-concept testing. The sensor concept depended, however, on optical fiber tips wearing with the loaded part. The fiber tip wear resulted in too much optical input variability; the sensor could not provide adequate stability for measurement. Siemens developed an alternative optical wear sensor approach that used a commercial PHILTEC, Inc. optical gap sensor with an optical spacer to keep the fibers away from the wearing surface. The gap sensor measured the length of the wearing spacer to follow loaded-part wear. This optical wear sensor was developed to a Technology Readiness Level (TRL) of 5. It was validated in lab tests and installed on a floating transition seal in an F-Class gas turbine. Laboratory tests indicate that the concept can measure wear on loaded parts at temperatures up to 800 °C with an uncertainty of < 0.3 mm. Testing in an F-Class engine installation showed that the optical spacer wore with the wearing part. The electro-optics box located outside the engine enclosure survived the engine enclosure environment. The fiber optic cable and the optical spacer, however, both degraded after about 100 operating hours, impacting the signal analysis.

  16. Magnetic unmixing of first-order reversal curve diagrams using principal component analysis

    NASA Astrophysics Data System (ADS)

    Lascu, Ioan; Harrison, Richard J.; Li, Yuting; Muraszko, Joy R.; Channell, James E. T.; Piotrowski, Alexander M.; Hodell, David A.

    2015-09-01

    We describe a quantitative magnetic unmixing method based on principal component analysis (PCA) of first-order reversal curve (FORC) diagrams. For PCA, we resample FORC distributions on grids that capture diagnostic signatures of single-domain (SD), pseudosingle-domain (PSD), and multidomain (MD) magnetite, as well as of minerals such as hematite. Individual FORC diagrams are recast as linear combinations of end-member (EM) FORC diagrams, located at user-defined positions in PCA space. The EM selection is guided by constraints derived from physical modeling and imposed by data scatter. We investigate temporal variations of two EMs in bulk North Atlantic sediment cores collected from the Rockall Trough and the Iberian Continental Margin. Sediments from each site contain a mixture of magnetosomes and granulometrically distinct detrital magnetite. We also quantify the spatial variation of three EM components (a coarse silt-sized MD component, a fine silt-sized PSD component, and a mixed clay-sized component containing both SD magnetite and hematite) in surficial sediments along the flow path of the North Atlantic Deep Water (NADW). These samples were separated into granulometric fractions, which helped constrain EM definition. PCA-based unmixing reveals systematic variations in EM relative abundance as a function of distance along NADW flow. Finally, we apply PCA to the combined data set of Rockall Trough and NADW sediments, which can be recast as a four-EM mixture, providing enhanced discrimination between components. Our method forms the foundation of a general solution to the problem of unmixing multicomponent magnetic mixtures, a fundamental task of rock magnetic studies.
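
    The following is a rough sketch of the unmixing step under stated assumptions: the flattened FORC grids are synthetic, and the end-member positions and the closure-to-one constraint are illustrative choices, not the authors' exact procedure.

    ```python
    # PCA of flattened FORC distributions followed by non-negative unmixing.
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    forcs = rng.random((40, 30 * 30))   # 40 specimens, each a 30x30 FORC grid

    # Project the collection onto its first two principal components.
    scores = PCA(n_components=2).fit_transform(forcs)

    # End members placed at user-defined positions in PCA score space
    # (here, the extremes of PC1; physical modelling would guide this choice).
    em = np.array([[scores[:, 0].min(), 0.0],
                   [scores[:, 0].max(), 0.0]])

    # Express each specimen as non-negative end-member abundances summing to 1.
    A = np.vstack([em.T, np.ones(len(em))])   # rows: PC1, PC2, closure
    abundances = np.array([nnls(A, np.append(s, 1.0))[0] for s in scores])
    ```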

  17. A Component-based Programming Model for Composite, Distributed Applications

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  18. Controllable-stiffness components based on magnetorheological elastomers

    NASA Astrophysics Data System (ADS)

    Ginder, John M.; Nichols, Mark E.; Elie, Larry D.; Clark, Seamus M.

    2000-06-01

    So-called magnetorheological (MR) elastomers, comprising rubbery polymers loaded with magnetizable particles that are aligned in a magnetic field, possess dynamic stiffness and damping that can subsequently be controlled by applied fields. Tunable automotive bushings and mounts incorporating these materials and an embedded magnetic field source have been constructed. In this article, the response of these components to dynamic mechanical loading is described. They behave essentially as elastomeric springs whose stiffness and damping are increased by tens of percent with an applied electrical current. Their response time to a change in current is less than ten milliseconds. Beyond serving as tunable springs or force generators, these components may also serve as deflection sensors.

  19. Respiratory dose analysis for components of ambient particulate matter

    EPA Science Inventory

    Particulate matter (PM) in the atmosphere is a complex mixture of particles with different sizes and chemical compositions. Although PM is known to induce health effects, specific attributes of PM that may cause health effects are somewhat ambiguous. Dose of each specific compone...

  20. A Component Analysis of Schedule Thinning during Functional Communication Training

    ERIC Educational Resources Information Center

    Betz, Alison M.; Fisher, Wayne W.; Roane, Henry S.; Mintz, Joslyn C.; Owen, Todd M.

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can…

  1. [Experimental Conditions and Reliability Analysis of Results of COD components].

    PubMed

    Li, Zhi-hua; Zhang, Yin; Han, Xing; Yu, Ke; Li, Ru-jia

    2015-10-01

    The present study attempts to use SF (the ratio OUR_max/OUR_en) instead of S0/X0 as an index of optimal initial conditions for the determination of COD components by means of respirometry, thereby simplifying the measurement process and allowing the operation to be automated. Further, the ratio of COD consumed by the growth of biomass can be used for reliability assessment of the results. Experiments show that the conditions for obtaining good results are as follows: (1) for samples composed largely of easily biodegradable components (e.g., synthetic wastewater made with sodium acetate), SF should be in the range of 2.8 to 5.3, and the ratio of COD consumed by growth of biomass should be less than 30%; (2) for samples composed of both readily biodegradable and slowly biodegradable components (i.e., typical domestic wastewater), SF should be in the range of 5.8 to 6.4, and the ratio of COD consumed by growth of biomass should be less than 30%; (3) for samples composed largely of slowly biodegradable components (e.g., industrial wastewater such as landfill leachate), SF should be 15 or less, and the ratio of COD consumed by growth of biomass should be approximately 40%. Therefore, when respirometry is used for the determination of COD components, the optimal conditions in terms of SF increase with the complexity of the carbon source.
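
    As a compact restatement of these criteria, the sketch below encodes them as a reliability screen; the function name and sample-type labels are hypothetical, and the thresholds are taken directly from the ranges above.

    ```python
    # Reliability screen for respirometric COD fractionation (illustrative).
    def cod_run_acceptable(sample_type: str, sf: float, growth_ratio: float) -> bool:
        """Return True if SF and the growth-consumed COD ratio meet the criteria."""
        if sample_type == "readily_biodegradable":   # e.g., sodium acetate wastewater
            return 2.8 <= sf <= 5.3 and growth_ratio < 0.30
        if sample_type == "mixed":                   # typical domestic wastewater
            return 5.8 <= sf <= 6.4 and growth_ratio < 0.30
        if sample_type == "slowly_biodegradable":    # e.g., landfill leachate
            return sf <= 15 and growth_ratio <= 0.40
        raise ValueError(f"unknown sample type: {sample_type}")

    print(cod_run_acceptable("mixed", sf=6.0, growth_ratio=0.25))  # True
    ```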

  2. A Critical Analysis of Football Bowl Subdivision Coaching Contract Components

    ERIC Educational Resources Information Center

    Nichols, Justin Keith

    2012-01-01

    This exploratory study is designed to inventory and analyze contract components used by Football Bowl Subdivision (FBS) institutions in the National Collegiate Athletic Association (NCAA) to further contribute to the body of research. The FBS comprises 120 institutions, and 94 of those institutions submitted contracts to "USA Today"…

  3. An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Fava, Joseph L.

    1987-01-01

    Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…

  4. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values, because these values contain information on various skin chromophores simultaneously. It is therefore necessary to extract the density of each skin chromophore independently. The isolation of a melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003; however, that work did not provide a quantification method for melanin pigmentation. This paper introduces a quantification method based on ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. As a result, the method worked well for skin images showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to the subjective rating values for the appearance of pigmentation. Further analysis is needed to characterize the appearance of pigmentation in terms of the size of the pigmented area and its spatial gradation.
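
    A minimal sketch of such a pipeline follows, assuming the common Beer-Lambert trick of working in optical-density (negative log) space, where chromophores mix roughly linearly; the data, component index, and thresholds are hypothetical.

    ```python
    # ICA-based melanin map extraction followed by LoG spot detection (sketch).
    import numpy as np
    from sklearn.decomposition import FastICA
    from scipy.ndimage import gaussian_laplace

    rng = np.random.default_rng(2)
    rgb = rng.uniform(0.2, 0.9, size=(128, 128, 3))   # stand-in for a skin patch
    h, w, _ = rgb.shape

    # Optical-density space: chromophore contributions mix approximately linearly.
    density = -np.log(rgb.reshape(-1, 3) + 1e-6)

    # Two sources: one expected to track melanin, the other hemoglobin.
    sources = FastICA(n_components=2, random_state=0).fit_transform(density)

    # Which source is melanin must be identified (e.g., by its absorbance
    # signature); taking index 0 here is an assumption of this sketch.
    melanin = sources[:, 0].reshape(h, w)

    # LoG filtering emphasizes blob-like pigmented spots; threshold for area.
    response = gaussian_laplace(melanin, sigma=3)
    pigmented_pixels = int((response < -0.01).sum())   # hypothetical threshold
    ```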

  5. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis.

    PubMed

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, large and growing collections of medical images make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features that provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383
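
    The two-dimensional extension can be sketched with a tensor-product basis: smooth each image onto a small coefficient grid, then run PCA on the coefficient vectors. Everything below (basis shape, sizes, data) is a synthetic stand-in, not the authors' implementation.

    ```python
    # Two-dimensional functional PCA via a tensor-product smooth basis (sketch).
    import numpy as np

    rng = np.random.default_rng(3)
    images = rng.random((50, 32, 32))                # 50 images on a 32x32 grid

    def smooth_basis(n_points, n_basis, width=0.08):
        # Gaussian-bump basis as a simple smooth stand-in for B-splines.
        x = np.linspace(0, 1, n_points)[:, None]
        centers = np.linspace(0, 1, n_basis)[None, :]
        return np.exp(-0.5 * ((x - centers) / width) ** 2)

    B = smooth_basis(32, 10)                         # 32 x 10 marginal basis
    P = np.linalg.pinv(B)

    # Project each image onto the tensor-product basis: C = P X P^T.
    coefs = np.stack([P @ img @ P.T for img in images]).reshape(50, -1)

    # PCA of the coefficient vectors yields functional principal component scores.
    coefs -= coefs.mean(axis=0)
    U, S, Vt = np.linalg.svd(coefs, full_matrices=False)
    scores = U[:, :5] * S[:5]                        # first five FPC scores
    ```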

  6. Detecting Genomic Signatures of Natural Selection with Principal Component Analysis: Application to the 1000 Genomes Data

    PubMed Central

    Duforet-Frebourg, Nicolas; Luu, Keurcien; Laval, Guillaume; Bazin, Eric; Blum, Michael G.B.

    2016-01-01

    To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans for natural selection using principal component analysis (PCA). We show that the common FST index of genetic differentiation between populations can be viewed as the proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework for detecting genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1), comprising 850 individuals from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3×). The correlations between genetic variation and each principal component recover well-known targets of positive selection (EDAR, SLC24A5, SLC45A2, DARC), as well as new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and noncoding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt fast open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. PMID:26715629
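
    A toy version of the scan statistic can be written in a few lines: compute the leading principal components of the centered genotype matrix, then score each variant by its squared correlations with those components. The genotype matrix below is random, so the "candidates" are meaningless; with real data they would flag loci such as those listed above.

    ```python
    # PCA-based genome scan: per-SNP correlation with leading PCs (sketch).
    import numpy as np

    rng = np.random.default_rng(4)
    genotypes = rng.integers(0, 3, size=(850, 10_000)).astype(float)

    G = genotypes - genotypes.mean(axis=0)           # centre each SNP column
    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    K = 2
    pcs = U[:, :K]                                   # individual scores, unit norm

    # Correlation of each SNP with each PC (columns of U are zero-mean).
    Gn = G / (G.std(axis=0) + 1e-12)
    r = (pcs.T @ Gn) / np.sqrt(G.shape[0])           # K x SNPs correlations
    stat = (r ** 2).sum(axis=0)                      # per-SNP scan statistic
    candidates = np.argsort(stat)[-10:]              # strongest candidate loci
    ```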

  7. Electrocardiogram beat detection enhancement using independent component analysis.

    PubMed

    Kuzilek, Jakub; Lhotska, Lenka

    2013-06-01

    Beat detection is a basic and fundamental step in electrocardiogram (ECG) processing. In many ECG applications, strong artifacts from biological or technical sources can appear and distort the ECG signal. A desirable property of a beat detection algorithm is to cope with these distortions and detect beats in any situation. Our method is an extension of Christov's beat detection algorithm, which detects beats using a combined adaptive threshold on a transformed ECG signal (the complex lead). Our offline extension adds an estimate of the independent components of the measured signal to the ECG transformation, creating a signal called the complex component, which enhances ECG activity and enables beat detection in the presence of strong noise. This makes the beat detection algorithm much more robust to unpredictable noise, typical of Holter ECGs and telemedicine applications of ECG. We compared our algorithm with our implementations of Christov's and Hamilton's beat detection algorithms.
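
    The sketch below illustrates the general idea under stated assumptions: FastICA separates components across leads, and their rectified first differences are summed into a single detection signal. The combination rule and the fixed threshold are simplifications, not the authors' exact transformation or adaptive threshold.

    ```python
    # ICA-enhanced beat-detection signal ("complex component"-style), sketch.
    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 360                                          # sampling rate in Hz (assumed)
    rng = np.random.default_rng(5)
    ecg = rng.standard_normal((12, 10 * fs))          # stand-in 12-lead record

    # Estimate independent components across leads (offline, whole record).
    sources = FastICA(n_components=4, random_state=0).fit_transform(ecg.T).T

    # QRS complexes produce coincident peaks across components, so summing the
    # absolute first differences yields a signal with enhanced beat activity.
    detect = np.abs(np.diff(sources, axis=1)).sum(axis=0)

    # A real detector applies a combined adaptive threshold; fixed here for brevity.
    beats = np.where(detect > 0.6 * detect.max())[0]
    ```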

  8. Analysis of electronic component failures using high-density radiography

    SciTech Connect

    Tuohig, W.D.; Potter, T.J.

    1991-11-01

    The exceptional resolution and nondestructive nature of microfocus radiography have proven to be extremely useful in the diagnosis of electronic component failures, particularly when the components are contained in sealed or encapsulated assemblies. An epoxy-encapsulated NTC thermistor and an epitaxial silicon P-N junction photodetector are examples of discrete devices in which the cause of failure was correctly hypothesized directly from a radiographic image. Subsequent destructive physical examinations confirmed the initial hypothesis and established the underlying cause in each case. The problem in a vacuum switch tube that failed to function was apparent in the radiographic image, but the underlying cause was not clear. However, radiography also showed that the position of a flat cable in the assembly could contribute to failure, an observation that resulted in a change in manufacturing procedure. In each of these instances, microradiography played a key role in decisions concerning the root cause of failure, product viability, and corrective action. 15 refs., 10 figs.

  9. Analysis of the hadron component in E.A.S.

    NASA Technical Reports Server (NTRS)

    Procureur, J.; Stamenov, J. N.; Stavrev, P. V.; Ushev, S. Z.

    1985-01-01

    Hadrons in extensive air showers (E.A.S.) provide direct information about high-energy interactions. As a rule, the biases pertaining to different shower array arrangements have a relatively large influence on the basic phenomenological characteristics of the E.A.S. hadron component. In this situation, the correct comparison between model-calculated and experimental characteristics is of great importance for the reliability of conclusions drawn about high-energy interaction characteristics.

  10. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes, as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  11. Correlation map analysis between appearances of Japanese facial images and amount of melanin and hemoglobin components in the skin

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Uetsuki, Keiji; Ojima, Nobutoshi; Miyake, Yoichi

    2001-06-01

    Skin color reproduction becomes increasingly important with the recent progress in various imaging systems. In this paper, based on subjective experiments, correlation maps are analyzed between the appearance of Japanese facial images and the amounts of melanin and hemoglobin components in the facial skin. Facial color images were taken with a digital still camera. The spatial distributions of the melanin and hemoglobin components in the facial color image were separated by independent component analysis of skin colors. The separated components were recombined to simulate various facial color images by changing the quantities of the two separated pigments. The synthesized images were evaluated subjectively by comparison with the original facial images. From the analysis of the correlation maps, we identified the visual and psychological terms that are closely related to how the melanin component influences the appearance of the facial color image.

  12. Analysis of complex elastic structures by a Rayleigh-Ritz component modes method using Lagrange multipliers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Klein, L. R.

    1974-01-01

    The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis, with the continuity conditions enforced by means of Lagrange multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad hoc methods, and experimental results.
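
    For readers unfamiliar with the constrained Rayleigh-Ritz setup, a minimal sketch of the resulting eigenproblem (our notation, not the thesis's): let q collect the modal coordinates of all components, K and M the block-diagonal stiffness and mass matrices assembled from the unconstrained component modes, and Cq = 0 the interface continuity conditions. Making the Lagrangian stationary gives

    ```latex
    \mathcal{L}(q,\lambda) = \tfrac{1}{2}\, q^{\mathsf{T}} K q
      - \tfrac{1}{2}\,\omega^{2} q^{\mathsf{T}} M q + \lambda^{\mathsf{T}} C q,
    \qquad
    \begin{bmatrix} K & C^{\mathsf{T}} \\ C & 0 \end{bmatrix}
    \begin{bmatrix} q \\ \lambda \end{bmatrix}
    = \omega^{2}
    \begin{bmatrix} M & 0 \\ 0 & 0 \end{bmatrix}
    \begin{bmatrix} q \\ \lambda \end{bmatrix},
    ```

    so the multipliers enter the eigenproblem alongside the modal coordinates and enforce continuity without constraining the component mode shapes themselves.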

  13. Structural analysis of ultra-high speed aircraft structural components

    NASA Technical Reports Server (NTRS)

    Lenzen, K. H.; Siegel, W. H.

    1977-01-01

    The buckling characteristics of a hypersonic beaded skin panel were investigated under pure compression with boundary conditions similar to those found in a wing mounted condition. The primary phases of analysis reported include: (1) experimental testing of the panel to failure; (2) finite element structural analysis of the beaded panel with the computer program NASTRAN; and (3) summary of the semiclassical buckling equations for the beaded panel under purely compressive loads. A comparison of each of the analysis methods is also included.

  14. Diabetic retinopathy: a quadtree based blood vessel detection algorithm using RGB components in fundus images.

    PubMed

    Reza, Ahmed Wasif; Eswaran, C; Hati, Subhas

    2008-04-01

    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for the detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post-filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing stage enhances the image, making it better suited for the subsequent analysis, and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post-filtration and morphological reconstruction help fill the edges of the blood vessels and remove false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of all three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods, as well as with results obtained using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.
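
    A bare-bones variance-based quadtree of the kind described can be sketched as follows; the splitting criterion, block sizes, and data are hypothetical stand-ins for the paper's actual settings.

    ```python
    # Variance-driven quadtree decomposition of a fundus channel (sketch).
    import numpy as np

    def quadtree(img, x, y, size, var_thresh, min_size, leaves):
        """Recursively split a square block until it is homogeneous or minimal."""
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= var_thresh:
            leaves.append((x, y, size, float(block.mean())))
            return
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(img, x + dx, y + dy, half, var_thresh, min_size, leaves)

    rng = np.random.default_rng(6)
    green = rng.random((256, 256))     # stand-in for the green fundus channel
    leaves = []
    quadtree(green, 0, 0, 256, var_thresh=0.05, min_size=8, leaves=leaves)

    # Small, high-variance leaves mark candidate vessel edges; median filtering
    # and morphological reconstruction would refine them in the full pipeline.
    print(len(leaves), "leaf blocks")
    ```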

  15. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (P...

  16. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple yet powerful user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: (1) 1 Hz with increasing amplitude, (2) 4 Hz with increasing amplitude, (3) 16 Hz with increasing amplitude, and (4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.

  17. Bioelectric signal classification using a recurrent probabilistic neural network with time-series discriminant component analysis.

    PubMed

    Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio

    2013-01-01

    This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations, and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that the parameters can be obtained appropriately as network coefficients via a backpropagation-through-time-based training algorithm. The network is designed to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time required for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.
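
    As a rough, simplified stand-in for this pipeline (deliberately omitting the hidden Markov dynamics and the network training), the sketch below pairs an orthogonal projection with class-conditional Gaussian mixtures and Bayes-rule posteriors; all sizes and data are synthetic.

    ```python
    # Reduced-dimension Gaussian-mixture classification (TSDCA-like, simplified).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    X = rng.standard_normal((600, 64))        # windows x features (synthetic EEG)
    y = rng.integers(0, 2, size=600)          # two classes

    pca = PCA(n_components=8).fit(X)          # orthogonal compression
    Z = pca.transform(X)

    gmms = [GaussianMixture(n_components=3, random_state=0).fit(Z[y == c])
            for c in (0, 1)]
    priors = np.bincount(y) / len(y)

    def posterior(x):
        """Class posteriors for one pattern via Bayes' rule in the reduced space."""
        z = pca.transform(x.reshape(1, -1))
        log_p = np.array([g.score_samples(z)[0] for g in gmms]) + np.log(priors)
        p = np.exp(log_p - log_p.max())
        return p / p.sum()

    print(posterior(X[0]))
    ```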

  18. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.

    PubMed

    Dang, Chinh; Radha, Hayder

    2015-11-01

    Key frame extraction algorithms consider the problem of selecting a subset of the most informative frames from a video to summarize its content. Several applications, such as video summarization, search, indexing, and prints from video, can benefit from key frames extracted from the video under consideration. Most approaches in this class of algorithms work directly with the input video data set, without considering the underlying low-rank structure of the data set. Other algorithms exploit the low-rank component only, ignoring the other key information in the video. In this paper, a novel key frame extraction framework based on robust principal component analysis (RPCA) is proposed. Furthermore, we target the challenging application of extracting key frames from unstructured consumer videos. The proposed framework is motivated by the observation that RPCA decomposes an input data set into: 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about an element in the same data set. The two information types are combined into a single l1-norm-based non-convex optimization problem to extract the desired number of key frames. Moreover, we develop a novel iterative algorithm to solve this optimization problem. The proposed RPCA-based framework does not require shot detection, segmentation, or semantic understanding of the underlying video. Finally, experiments are performed on a variety of consumer and other types of videos. A comparison of the results obtained by our method with the ground truth and with related state-of-the-art algorithms clearly illustrates the viability of the proposed RPCA-based framework.
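
    A compact sketch of the decomposition itself follows, using the standard principal component pursuit updates (singular-value thresholding for the low-rank part, entrywise soft-thresholding for the sparse part); the key-frame ranking rule at the end is a simplification of the paper's l1-based selection, not its exact algorithm.

    ```python
    # Robust PCA by principal component pursuit (inexact ALM style), sketch.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def rpca(M, n_iter=100):
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))          # standard weight on the sparse term
        mu = 0.25 * m * n / np.abs(M).sum()     # common penalty heuristic
        L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt    # low-rank update
            S = soft(M - L + Y / mu, lam / mu)           # sparse update
            Y += mu * (M - L - S)                        # dual ascent
        return L, S

    video = np.random.default_rng(8).random((1024, 120))  # pixels x frames
    L, S = rpca(video)
    key_frames = np.argsort(np.abs(S).sum(axis=0))[-5:]   # most distinct frames
    ```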

  19. Component-Based Approach for Educating Students in Bioinformatics

    ERIC Educational Resources Information Center

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…