Science.gov

Sample records for component analysis based

  1. CO component estimation based on the independent component analysis

    SciTech Connect

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA is a promising technique for extracting CO-like components without prior assumptions about their distributions and frequency dependences.
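
    The kurtosis-ranked separation described above can be illustrated with a short sketch. The following Python snippet (using scikit-learn's FastICA) mixes three synthetic one-dimensional sources standing in for the CMB, dust, and CO maps; the sources, mixing matrix, and sizes are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 100_000
    cmb = rng.normal(size=n)                               # Gaussian: kurtosis ~ 0
    dust = rng.lognormal(sigma=0.5, size=n)                # mildly non-Gaussian
    co = rng.exponential(size=n) * (rng.random(n) < 0.05)  # sparse, highly non-Gaussian

    A = np.array([[1.0, 0.8, 0.3],                         # illustrative frequency responses
                  [1.0, 0.5, 0.1],                         # of the three mock channels
                  [1.0, 1.6, 0.9]])
    X = A @ np.vstack([cmb, dust, co])                     # three mock frequency maps

    S = FastICA(n_components=3, random_state=0).fit_transform(X.T).T
    order = np.argsort(-np.abs(kurtosis(S, axis=1)))       # rank by non-Gaussianity
    print("kurtosis, most non-Gaussian first:", kurtosis(S, axis=1)[order])
    ```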

  2. Enhancement of textural differences based on morphological component analysis.

    PubMed

    Chi, Jianning; Eramian, Mark

    2015-09-01

    This paper proposes a new texture enhancement method that uses an image decomposition to represent different visual characteristics of textures by separate components, in contrast with previous methods, which either enhance texture indirectly or represent all texture information using a single image component. Our method is intended as a preprocessing step prior to texture-based image segmentation algorithms. It uses a modification of morphological component analysis (MCA) that separates texture into multiple morphological components, each representing a different visual characteristic of texture. We select four such texture characteristics and propose new dictionaries to extract these components using MCA. We then propose procedures for modifying each texture component and recombining them to produce a texture-enhanced image. We applied our method as a preprocessing step prior to a number of texture-based segmentation methods and compared the accuracy of the results, finding that our method produced results superior to comparator methods for all segmentation algorithms tested. We also demonstrate by example the main mechanism by which our method produces superior results: it causes the clusters of local texture features of each distinct image texture to diverge mutually within the multidimensional feature space to a far greater degree than the comparator enhancement methods. PMID:25935032

  3. Random phase-shifting interferometry based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Lu, Xiaoxu; Tian, Jindong; Shou, Junwei; Zheng, Dejin; Zhong, Liyun

    2016-07-01

    A novel phase retrieval algorithm for random phase-shifting interferometry is proposed based on independent component analysis (ICA). By recombining pixel positions, a sequence of phase-shifting interferograms with random phase shifts is decomposed into a group of mutually independent components, and the background and measured phase of the interferogram can then be obtained with a simple arctangent operation. Compared with the conventional, highly accurate advanced iterative algorithm (AIA), both simulation and experimental results demonstrate that the proposed ICA algorithm achieves high accuracy, rapid convergence, and good noise tolerance in random phase-shifting interferometry.
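
    As a rough illustration of the idea, the sketch below simulates interferograms I_n = a + b·cos(φ + δ_n) with random shifts δ_n, lets FastICA recover two spatial components approximating cos φ and sin φ, and takes an arctangent. The fringe model and data are assumptions; in practice a sign/scale ambiguity in the recovered components may need correction.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    h = w = 128
    y, x = np.mgrid[0:h, 0:w]
    phi = 2 * np.pi * (x + y) / w                   # illustrative true phase
    deltas = rng.uniform(0, 2 * np.pi, size=8)      # random, unknown phase shifts

    frames = np.stack([1.0 + 0.8 * np.cos(phi + d) for d in deltas])
    X = frames.reshape(len(deltas), -1)             # each frame is one mixture

    # Each frame = a + b*cos(d)*cos(phi) - b*sin(d)*sin(phi), i.e. a linear
    # mixture of the spatial signals cos(phi) and sin(phi).
    S = FastICA(n_components=2, random_state=0).fit_transform(X.T)
    phase = np.arctan2(S[:, 1], S[:, 0]).reshape(h, w)  # wrapped phase estimate
    ```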

  4. Filterbank-based independent component analysis for acoustic mixtures

    NASA Astrophysics Data System (ADS)

    Park, Hyung-Min

    2011-06-01

    Independent component analysis (ICA) for acoustic mixtures has been a challenging problem due to the very complex reverberation involved in real-world mixing environments. In an effort to overcome the disadvantages of the conventional time-domain and frequency-domain approaches, this paper describes filterbank-based independent component analysis for acoustic mixtures. In this approach, input signals are split into subband signals and decimated. A simplified network performs ICA on the decimated signals, and finally the independent components are synthesized. First, a uniform filterbank is employed for basic and simple derivation and implementation. The uniform-filterbank-based approach achieves better separation performance than the frequency-domain approach and gives faster convergence with less computational complexity than the time-domain approach. Since most natural signals have exponentially or more steeply decreasing energy as frequency increases, their spectral characteristics motivate a Bark-scale filterbank, which divides the low-frequency region finely and the high-frequency region coarsely. The Bark-scale-filterbank-based approach shows faster convergence than the uniform-filterbank-based one because its inputs in the low-frequency subbands are more whitened. It also improves separation performance because it has enough data to train the adaptive parameters accurately in the high-frequency subbands.

  5. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
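
    The low-rank-plus-sparse decomposition at the heart of this approach can be sketched with a generic principal component pursuit solver (inexact augmented Lagrangian). This is an illustrative implementation with common default parameters, not the authors' exact algorithm.

    ```python
    import numpy as np

    def rpca(M, lam=None, tol=1e-7, max_iter=500):
        """Decompose M into low-rank L plus sparse S (principal component pursuit)."""
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        S = np.zeros_like(M, dtype=float)
        Y = np.zeros_like(M, dtype=float)
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)   # common penalty choice
        for _ in range(max_iter):
            # Low-rank update: singular value thresholding of (M - S + Y/mu).
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
            # Sparse update: elementwise soft-thresholding.
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
            Y += mu * (M - L - S)                       # dual ascent step
            if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                break
        return L, S

    # Each column could hold one vectorized training iris image; L then carries
    # the shared low-rank structure and S the occlusions and reflections.
    ```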

  6. Biological agent detection based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Mudigonda, Naga R.; Kacelenga, Ray

    2006-05-01

    This paper presents an algorithm, based on principal component analysis, for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical method for estimating background biological activity so as to make the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle count distribution and uses this information to suppress false alarms. The performance of the method was evaluated using a total of 68 tests, including 51 releases of Bacillus globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 to 65 Agent Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and achieved an overall probability of detection of 94% for BG, without generating any false alarms for obscurants, at a detection threshold of 0.6 on a scale of 0 to 1. The method also successfully detected BG in the presence of diesel smoke and salt water fumes. The system responded to all five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.

  7. Life Assessment of Steam Turbine Components Based on Viscoplastic Analysis

    NASA Astrophysics Data System (ADS)

    Choi, Woo-Sung; Fleury, Eric; Kim, Bum-Shin; Hyun, Jung-Seob

    Unsteady thermal and mechanical loading in turbine components is caused by the transient regimes arising during start-ups and shut-downs and by changes in the operating regime of steam power plants; this results in nonuniform strain and stress distributions. Thus, an accurate knowledge of the stresses caused by various loading conditions is required to ensure the integrity and an accurate life assessment of the components of a turbine. Although the materials of steam turbine components deform inelastically at high temperature, currently only elastic calculations are performed, for safety and simplicity. Numerous models have been proposed to describe viscoplastic (time-dependent) behavior; these models are rather elaborate, and it is difficult to incorporate them into a finite element code in order to simulate the loading of complex structures. In this paper, the total lifetime of the components of a steam turbine was calculated by combining a viscoplastic constitutive equation with the ABAQUS finite element code. Viscoplastic analysis was conducted by focusing mainly on simplified constitutive equations with linear kinematic hardening, which are simple enough to be used effectively in computer simulation. The von Mises stress distribution of an HIP turbine rotor was calculated during cold start-up, and a reasonable number of cycles was obtained from Langer's equation.

  8. Principal component analysis based methodology to distinguish protein SERS spectra

    NASA Astrophysics Data System (ADS)

    Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.

    2011-05-01

    Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility of the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin, and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. The paper clearly shows that the combined use of SERS measurements and PCA is effective in categorizing proteins on the basis of secondary structure.

  9. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images; it resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in the space and time (frequency) domains. From the experimental results, it is seen that this face recognition method yields a significant improvement in recognition rate as well as better computational efficiency.
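
    A minimal sketch of the combination, assuming PyWavelets and scikit-learn and a placeholder face dataset: a single-level 2-D Haar DWT reduces each image to its approximation band before the eigenface (PCA) projection, cutting the eigenvector computation cost.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_features(img):
        cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # keep approximation band only
        return cA.ravel()

    faces = np.random.rand(40, 64, 64)             # placeholder for a face dataset
    X = np.stack([wavelet_features(f) for f in faces])

    pca = PCA(n_components=20).fit(X)              # eigenfaces in the wavelet domain
    codes = pca.transform(X)                       # features for a classifier
    # Recognition: nearest neighbour between `codes` of probe and gallery images.
    ```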

  10. Experimental analysis of Model-Based Roentgen Stereophotogrammetric Analysis (MBRSA) on four typical prosthesis components.

    PubMed

    Seehaus, Frank; Emmerich, Judith; Kaptein, Bart L; Windhagen, Henning; Hurschler, Christof

    2009-04-01

    Classical marker-based roentgen stereophotogrammetric analysis (RSA) is an accurate method of measuring in vivo implant migration. A disadvantage of the method is the necessity of placing tantalum markers on the implant, which entails additional manufacturing and certification effort. Model-based RSA (MBRSA) is a method in which pose estimation of geometric surface models of the implant is used to detect implant migration; the placement of prosthesis markers is thus no longer necessary. The accuracy of the pose-estimation algorithms depends on the geometry of the prosthesis as well as the accuracy of the surface models used. The goal of this study was thus to evaluate the experimental accuracy and precision of the MBRSA method for four different but typical, commonly implanted prosthesis geometries. Does a relationship exist between the accuracy of MBRSA and prosthesis geometry? Four different prosthesis geometries were investigated: one femoral and one tibial total knee arthroplasty (TKA) component and two different femoral stem total hip arthroplasty (THA) components. An experimental phantom model was used to simulate two different implant migration protocols, whereby either the implant was moved relative to the surrounding bone (relative prosthesis-bone motion, RM) or, similar to the double-repeated measures performed to assess accuracy clinically, both the prosthesis and the surrounding bone model were moved (zero relative prosthesis-bone motion, ZRM). Motions were performed about three translational and three rotational axes, respectively. The maximum 95% confidence interval (CI) for MBRSA of all four prostheses investigated was better than -0.034 to 0.107 mm for in-plane and -0.217 to 0.069 mm for out-of-plane translation, and from -0.038 deg to 0.162 deg for in-plane and from -1.316 deg to 0.071 deg for out-of-plane rotation, with no clear differences between the ZRM and RM protocols observed. Accuracy in translation was similar

  11. Gabor feature-based apple quality inspection using kernel principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Automated inspection of apple quality involves computer recognition of good apples and blemished apples based on geometric or statistical features derived from apple images. This paper introduces a Gabor feature-based kernel, principal component analysis (PCA) method; by combining Gabor wavelet rep...

  12. Robust Adaptive Principal Component Analysis Based on Intergraph Matrix for Medical Image Registration

    PubMed Central

    Xiao, Jinjun; Li, Min; Zhang, Haipeng

    2015-01-01

    This paper proposes a novel robust adaptive principal component analysis (RAPCA) method based on intergraph matrix for image registration in order to improve robustness and real-time performance. The contributions can be divided into three parts. Firstly, a novel RAPCA method is developed to capture the common structure patterns based on intergraph matrix of the objects. Secondly, the robust similarity measure is proposed based on adaptive principal component. Finally, the robust registration algorithm is derived based on the RAPCA. The experimental results show that the proposed method is very effective in capturing the common structure patterns for image registration on real-world images. PMID:25960739

  13. Reduction of a collisional-radiative mechanism for argon plasma based on principal component analysis

    SciTech Connect

    Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.

    2015-06-15

    This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set onto a reduced basis made up of its principal components. Thus, the flow governing equations are solved only for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the use of the MG-PCA technique enables a substantial reduction of the computational time.

  14. Impact factor analysis of mixture spectra unmixing based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Shengbo; Guo, Xulin; Zhou, Chao

    2016-01-01

    Based on the spectral independence of different materials, independent component analysis (ICA), a blind source separation technique, can be applied to separate mixed hyperspectral signals. For the purpose of detecting objects on the sea and improving the precision of target recognition, an original ICA method is applied by analyzing the influence exerted by the spectral features of different materials and mixture materials on the spectral unmixing results. Due to the complexity of targets on the sea, several measured spectra of different materials were mixed with water spectra to simulate mixed spectra for decomposition. Synthetic mixed spectra were generated by linear combinations of different material and water spectra to obtain separated results, which we then compared with the measured spectra of each endmember by coefficient of determination. We conclude that factors that shift the original spectral characteristics toward a Gaussian distribution have a significant influence on the separated results, and that selecting a proper initial matrix and processing spectral data with lower noise can help the ICA method deliver more accurate separation results from hyperspectral data.

  15. Dependent component analysis based approach to robust demarcation of skin tumors

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2009-02-01

    A method for robust demarcation of basal cell carcinoma (BCC) is presented, employing a novel dependent component analysis (DCA)-based approach to unsupervised segmentation of the red-green-blue (RGB) fluorescent image of the BCC. It exploits spectral diversity between the BCC and the surrounding tissue. DCA represents an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, and ICA, we experimentally demonstrate good performance of DCA-based BCC demarcation in a demanding scenario where the intensity of the fluorescent image has been varied by almost two orders of magnitude.

  16. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis. It comprises three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones regrouped, to reconstruct a new set of data called Virtual Intrinsic Mode Functions (VIMFs). Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their trends and transformation coefficients. The analyses not only explain the underlying driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis has been conducted to verify this analysis. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
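
    A compact sketch of the three-step pipeline, assuming the PyEMD package for EEMD and a synthetic series in place of gold prices; the regrouping of minor IMFs into virtual IMFs is simplified here to pairwise sums.

    ```python
    import numpy as np
    from PyEMD import EEMD
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    t = np.linspace(0, 10, 2000)
    price = 0.5 * t + np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)

    imfs = EEMD(trials=50)(price)                   # step 1: decompose into IMFs
    # Step 2: regroup IMFs into virtual IMFs (here: simple pairwise sums).
    vimfs = np.array([imfs[i:i + 2].sum(axis=0) for i in range(0, len(imfs), 2)])
    # Step 3: ICA extracts statistically independent components from the VIMFs.
    ics = FastICA(n_components=min(3, len(vimfs)),
                  random_state=0).fit_transform(vimfs.T)
    ```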

  17. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) for removing continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have shown it to be one of the most effective methods for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the ANN-based one in terms of both performance and simplicity. PMID:26917856
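
    The non-parametric method lends itself to a short sketch: learn a PCA basis from a matrix of baseline-only training spectra, then estimate a new spectrum's baseline as its projection onto that basis and subtract it. The training matrix `B` of pure-baseline spectra is an assumed input.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def fit_baseline_basis(B, n_components=5):
        """B: (n_baselines, n_wavelengths) matrix of baseline-only training spectra."""
        return PCA(n_components=n_components).fit(B)

    def remove_baseline(spectrum, pca):
        # Project the spectrum onto the learned baseline subspace and subtract.
        baseline = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
        return spectrum - baseline, baseline
    ```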

  18. Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study

    NASA Astrophysics Data System (ADS)

    Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas

    2009-02-01

    We aim to capture and apply 3-dimensional bone fragility features to fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigenimages) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject and used the fracture risk indices to discriminate fracture patients from controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the AUC (area under the curve). When using the first group for training and the second for testing, the AUC was 0.880, compared with conventional fracture risk estimation methods based on bone densitometry, whose AUC values ranged between 0.782 and 0.871. When using the second group for training, the AUC was 0.839, compared with densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from the hip QCT atlas are associated with hip fracture, and such features may provide new quantitative measures of interest in osteoporosis research.

  19. Image-based pupil plane characterization via principal component analysis for EUVL tools

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-03-01

    We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the Semiconductor High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.

  20. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  1. Principal component analysis based carrier removal approach for Fourier transform profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao

    2015-05-01

    To handle the nonlinear carrier phase caused by the divergent illumination commonly adopted in fringe projection measurement, we propose a principal component analysis (PCA) based carrier removal method for Fourier transform profilometry. By PCA, the method decomposes the nonlinear carrier phase map into several principal components, and the carrier phase can be extracted from the first, dominant component. Compared with traditional methods, it is effective and requires less human intervention, since no data points need to be collected from a reference plane in advance. Further, the influence of lens distortion is considered, so the carrier can be determined more accurately. Our experiment shows the validity of the proposed approach.

  2. Extracting the core indicators of pulverized coal for blast furnace injection based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Hong-wei; Su, Bu-xin; Zhang, Jian-liang; Zhu, Meng-yi; Chang, Jian

    2013-03-01

    An updated approach to refining the core indicators of pulverized coal used for blast furnace injection, based on principal component analysis, is proposed in view of the disadvantages of the existing performance indicator system for pulverized coal used in blast furnaces. The presented method takes into account all the performance indicators of pulverized coal injection, including calorific value, ignition point, combustibility, reactivity, flowability, grindability, etc. Four core indicators of pulverized coal injection are selected and studied using principal component analysis, namely, comprehensive combustibility, comprehensive reactivity, comprehensive flowability, and comprehensive grindability. The newly established core index system is not only beneficial for narrowing down the current evaluation indices but also effective in avoiding the previous overlap among indicators, owing to the mutually independent index design. Furthermore, a comprehensive property indicator is introduced on the basis of the four core indicators, so that the injection properties of pulverized coal can be evaluated overall.

  3. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  4. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA to outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper presents two new IPCA-based outlier detection methods and performs a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
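
    One natural IPCA-based detector, sketched below under assumed window sizes and thresholds, maintains an incremental PCA model over the stream and flags samples whose reconstruction error is anomalously large; this illustrates the general idea, not the paper's specific methods.

    ```python
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    ipca = IncrementalPCA(n_components=3)
    threshold = 4.0        # in units of the running std of reconstruction error
    errors = []            # sliding history of reconstruction errors

    def process_batch(batch):
        """batch: (n_samples, n_sensors); batches must hold >= n_components rows."""
        global errors
        flags = np.zeros(len(batch), dtype=bool)
        if hasattr(ipca, "components_"):             # model already initialised
            recon = ipca.inverse_transform(ipca.transform(batch))
            err = np.linalg.norm(batch - recon, axis=1)
            if errors:
                mu, sd = np.mean(errors), np.std(errors) + 1e-12
                flags = err > mu + threshold * sd     # flag stream outliers
            errors = (errors + err.tolist())[-1000:]  # keep a bounded history
        ipca.partial_fit(batch)                       # update model incrementally
        return flags
    ```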

  5. Inverting geodetic time series with a principal component analysis-based inversion method

    NASA Astrophysics Data System (ADS)

    Kositsky, A. P.; Avouac, J.-P.

    2010-03-01

    The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through an inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series, showing that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 Mw 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
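
    The core of the approach can be sketched as follows: take the SVD (principal components) of the displacement time-series matrix, invert each spatial component for slip through a linear Green's-function matrix `G` (assumed given), and recombine with the temporal components. Regularization, which a real inversion would need, is omitted here.

    ```python
    import numpy as np

    def pcaim(X, G, r):
        """X: (n_stations, n_epochs) displacements; G: (n_stations, n_patches)
        elastic Green's functions; r: number of principal components kept."""
        Xc = X - X.mean(axis=1, keepdims=True)       # remove static offsets
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        # Invert each principal spatial pattern for an equivalent slip pattern.
        slip_pc = np.stack([np.linalg.lstsq(G, U[:, k] * s[k], rcond=None)[0]
                            for k in range(r)])      # (r, n_patches)
        slip_t = Vt[:r].T @ slip_pc                  # slip history, (n_epochs, n_patches)
        return slip_t
    ```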

  6. Modeling PCB dechlorination in aquatic sediments by principal component based factor analysis and positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Christensen, E. R.; Bzdusek, P. A.

    2003-04-01

    Anaerobic PCB dechlorination in aquatic sediments is a naturally occurring process that reduces dioxin-like PCB toxicity. The PCB biphenyl structure is kept intact, but the number of substituted chlorine atoms is reduced, primarily at the para and meta positions. Flanked para and meta chlorine dechlorination, as in process H/H', appears to be more common in situ than flanked and unflanked para and meta dechlorination, as in process Q. Aroclors that are susceptible to these reactions include 1242, 1248, 1254, and 1260. These dechlorination reactions have recently been modeled by a least squares method for Ashtabula River, Ohio, and Fox River, Wisconsin sediments. Prior to modeling the dechlorination reactions for an ecosystem, it is desirable to generate overall PCB source functions. One method of determining source functions is to use the loading matrices of a factor analytical model. We have developed such models based both on a principal component approach, including nonnegative oblique rotations, and on positive matrix factorization (PMF). While the principal component method first requires an eigenvalue analysis of a covariance matrix, the PMF method is based on a direct least squares analysis considering the loading and score matrices simultaneously. Loading matrices obtained from the PMF method are somewhat sensitive to the initial guess of source functions. Preliminary work indicates that a hybrid approach, considering first principal components and then PMF, may offer an optimum solution. The relationship of PMF to conventional chemical mass balance modeling, with or without prior knowledge of source functions, is also discussed.

  7. A novel concealed information test method based on independent component analysis and support vector machine.

    PubMed

    Gao, Junfeng; Lu, Liang; Yang, Yong; Yu, Gang; Na, Liantao; Rao, NiNi

    2012-01-01

    The concealed information test (CIT) has drawn much attention and has been widely investigated in recent years. In this study, a novel CIT method based on denoised P3 and machine learning was proposed to improve the accuracy of lie detection. Thirty participants were chosen as the guilty and innocent participants to perform the paradigms of 3 types of stimuli. The electroencephalogram (EEG) signals were recorded and separated into many single trials. In order to enhance the signal-to-noise ratio (SNR) of the P3 components, the independent component analysis (ICA) method was adopted to separate non-P3 components (i.e., artifacts) from every single trial, and a new method based on a topography template was proposed to automatically identify the P3 independent components (ICs). The P3 waveforms with high SNR were then reconstructed at the Pz electrode. Second, 3 groups of features based on time, frequency, and wavelets were extracted from the reconstructed P3 waveforms. Finally, 2 classes of feature samples were used to train a support vector machine (SVM) classifier, because it has higher performance compared with several other classifiers. Meanwhile, the optimal number of P3 ICs and some other parameter values in the classifiers were determined by cross-validation procedures. The presented method achieved a balanced test accuracy of 84.29% in detecting P3 components for the guilty and innocent participants, improving the efficiency of the CIT in comparison with previously reported methods. PMID:22423552
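
    A simplified sketch of the denoise-then-classify chain: FastICA decomposes a single trial, components are kept or zeroed by correlation with a P3 template (a stand-in for the paper's topography-template rule), and an SVM is cross-validated on extracted features. All data below are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def denoise_trial(trial, template, keep=1):
        """trial: (n_channels, n_times) EEG; template: (n_times,) P3 reference."""
        ica = FastICA(n_components=trial.shape[0], random_state=0)
        sources = ica.fit_transform(trial.T)             # (n_times, n_components)
        # Keep the components most correlated with the P3 template, zero the rest.
        score = np.abs(np.corrcoef(sources.T, template)[-1, :-1])
        mask = np.zeros(sources.shape[1])
        mask[np.argsort(-score)[:keep]] = 1
        return ica.inverse_transform(sources * mask).T   # P3-only reconstruction

    # Features (e.g., time-domain samples at Pz) -> SVM with cross-validation.
    X = np.random.randn(60, 40)                          # placeholder feature samples
    y = np.r_[np.ones(30), np.zeros(30)]                 # guilty vs. innocent labels
    print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
    ```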

  8. Blind spectral unmixing based on sparse component analysis for hyperspectral remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Wang, Xinyu; Zhao, Lin; Feng, Ruyi; Zhang, Liangpei; Xu, Yanyan

    2016-09-01

    Recently, many blind source separation (BSS)-based techniques have been applied to hyperspectral unmixing. In this paper, a new blind spectral unmixing method based on sparse component analysis (BSUSCA) is proposed to solve the problem of highly mixed data. The BSUSCA algorithm consists of an alternative scheme based on two-block alternating optimization, by which we can simultaneously obtain the endmember signatures and their corresponding fractional abundances. According to the spatial distribution of the endmembers, the sparse properties of the fractional abundances are considered in the proposed algorithm. A sparse component analysis (SCA)-based mixing matrix estimation method is applied to update the endmember signatures, and the abundance estimation problem is solved by the alternating direction method of multipliers (ADMM). SCA is utilized for the unmixing due to its various advantages, including the unique solution and robust modeling assumption. The robustness of the proposed algorithm is verified through simulated experimental study. The experimental results using both simulated data and real hyperspectral remote sensing images confirm the high efficiency and precision of the proposed algorithm.

  9. [Component analysis of complex mixed solution based on multidimensional diffuse reflectance spectroscopy].

    PubMed

    Li, Gang; Xiong, Chan; Zhao, Li-ying; Lin, Ling; Tong, Ying; Zhang, Bao-ju

    2012-02-01

    In the present paper, the authors propose a method for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy, analyzing the information carried by spectrum signals from the various optical properties of the various components of the analyte. The experimental instrument was built from a supercontinuum laser source, a motorized precision translation stage, and a spectrometer. Intralipid-20% was taken as the analyte and was diluted over a range of 1%-20% in distilled water. The diffuse reflectance spectrum signal was measured at 24 points within a distance of 1.5-13 mm (at intervals of 0.5 mm) from the incidence point. A partial least squares model was used to perform modeling and forecasting analysis on the spectral data collected at single and multiple points. The results showed that the most accurate calibration model was created from the spectral data acquired at the nearest 1-13 points above the incidence point, and the most accurate prediction model from the spectral signal acquired at the nearest 1-7 points. This proved that multidimensional diffuse reflectance spectroscopy can improve the spectral signal-to-noise ratio. Compared with traditional spectroscopy using a single optical property such as absorbance or reflectance, this method increases the contribution of the scattering characteristics of the analyte. The use of a variety of optical properties of the analyte can thus improve the accuracy of modeling and forecasting, and also provides a basis for component analysis of complex mixed solutions based on multidimensional diffuse reflectance spectroscopy. PMID:22512196
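
    The multi-point modeling step can be sketched with scikit-learn's PLS regression, concatenating the spectra measured at several distances into one feature vector per sample; shapes and the concentration vector are illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    n_samples, n_points, n_wavelengths = 40, 7, 200
    spectra = np.random.rand(n_samples, n_points, n_wavelengths)  # placeholder data
    conc = np.linspace(1, 20, n_samples)                          # % Intralipid

    X = spectra.reshape(n_samples, -1)     # concatenate the multi-point spectra
    pred = cross_val_predict(PLSRegression(n_components=5), X, conc, cv=5)
    ss_res = np.sum((conc - pred.ravel()) ** 2)
    ss_tot = np.sum((conc - conc.mean()) ** 2)
    print("cross-validated R^2:", 1 - ss_res / ss_tot)
    ```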

  10. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness in real facial image analysis systems.

  11. Crawling Waves Speed Estimation Based on the Dominant Component Analysis Paradigm.

    PubMed

    Rojas, Renán; Ormachea, Juvenal; Salo, Arthur; Rodríguez, Paul; Parker, Kevin J; Castaneda, Benjamin

    2015-10-01

    A novel method for estimating the shear wave speed from crawling waves based on the amplitude modulation-frequency modulation model is proposed. Our method consists of a two-step approach for estimating the stiffness parameter at the central region of the material of interest. First, narrowband signals are isolated in the time dimension to recover the locally strongest component and to reject distortions from the ultrasound data. Then, the shear wave speed is computed by the dominant component analysis approach and its spatial instantaneous frequency is estimated by the discrete quasi-eigenfunction approximations method. Experimental results on phantoms with different compositions and operating frequencies show coherent speed estimations and accurate inclusion locations. PMID:25628096

  12. Analysis of active components in Salvia miltiorrhiza injection based on vascular endothelial cell protection.

    PubMed

    Shen, Jie; Yang, Kai; Sun, Caihua; Zheng, Minxia

    2014-09-01

    Correlation analysis based on chromatograms and pharmacological activities is essential for understanding the effective components in complex herbal medicines. In this report, HPLC and measurement of antioxidant properties were used to describe the active ingredients of Salvia miltiorrhiza injection (SMI). HPLC results showed that tanshinol, protocatechuic aldehyde, rosmarinic acid, salvianolic acid B, protocatechuic acid, and their metabolites in rat serum may contribute to the efficacy of SMI. Assessment of antioxidant properties indicated that differences in the composition of serum powder of SMI caused differences in vascular endothelial cell protection. Bivariate correlation analysis showed that salvianolic acid B, tanshinol, and protocatechuic aldehyde were active components of SMI because they correlated with the antioxidant properties. PMID:25296678

  13. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiment, two questions remain: (1) How can the results obtained be explained? (2) How can the results be extended from these cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions, such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. One then separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of the parallel technique, parallel machine, and number of processors.

  14. Functional activity maps based on significance measures and Independent Component Analysis.

    PubMed

    Martínez-Murcia, F J; Górriz, J M; Ramírez, J; Puntonet, C G; Illán, I A

    2013-07-01

    The use of functional imaging has proven very helpful in the diagnosis of neurodegenerative diseases such as Alzheimer's Disease (AD). In many cases, the analysis of these images is performed by manual reorientation and visual interpretation, so new statistical techniques that allow a more quantitative analysis are needed. In this work, a new statistical approach to the analysis of functional images, based on significance measures and Independent Component Analysis (ICA), is presented. After image preprocessing, the voxels that best separate the two classes are extracted using significance measures such as the Mann-Whitney-Wilcoxon U-test (MWW) and Relative Entropy (RE). After this feature selection step, the voxel vectors are modelled by means of ICA, extracting a few independent components which are used as input to the classifier. Naive Bayes and Support Vector Machine (SVM) classifiers are used in this work. The proposed system has been applied to two different databases: a 96-subject Single Photon Emission Computed Tomography (SPECT) database from the "Virgen de las Nieves" Hospital in Granada, Spain, and a 196-subject Positron Emission Tomography (PET) database from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The proposed system achieves accuracies of up to 96.9% and 91.3% on the SPECT and PET databases, respectively, yielding many benefits over methods proposed in recent works. PMID:23660005
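
    A skeletal version of the pipeline, with random placeholders for the image data: a Mann-Whitney-Wilcoxon U-test ranks voxels, FastICA compresses the selected voxels to a few components, and a Naive Bayes classifier is cross-validated.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.decomposition import FastICA
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(96, 5000)                 # subjects x voxels (placeholder)
    y = np.r_[np.zeros(48), np.ones(48)]         # 0 = control, 1 = AD

    # Voxel selection: U-test p-value between the two classes, per voxel.
    pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
    sel = np.argsort(pvals)[:500]                # keep the most significant voxels

    Z = FastICA(n_components=10, random_state=0).fit_transform(X[:, sel])
    print(cross_val_score(GaussianNB(), Z, y, cv=5).mean())
    ```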

  15. Cardiac autonomic changes in middle-aged women: identification based on principal component analysis.

    PubMed

    Trevizani, Gabriela A; Nasario-Junior, Olivassé; Benchimol-Barbosa, Paulo R; Silva, Lilian P; Nadal, Jurandir

    2016-07-01

    The purpose of this study was to investigate the application of principal component analysis (PCA) to the power spectral density function (PSD) of consecutive normal RR intervals (iRR), aiming to assess its ability to discriminate healthy women by age group: a young group (20-25 years old) and a middle-aged group (40-60 years old). Thirty healthy, non-smoking female volunteers were investigated (13 young [mean ± SD (median): 22·8 ± 0·9 years (23·0)] and 17 middle-aged [51·7 ± 5·3 years (50·0)]). The iRR sequence was collected for ten minutes, breathing spontaneously, in the supine position, in the morning, using a heart rate monitor. After selecting the iRR segment (5 min) with the smallest variance, an autoregressive model was used to estimate the PSD. Five principal component coefficients extracted from the PSD signals were retained for analysis with a Mahalanobis distance classifier. A threshold established by logistic regression allowed separation of the groups with 100% specificity, 83·2% sensitivity, and 93·3% total accuracy. PCA thus appropriately classified the two groups of women in relation to age (young and middle-aged) based on PSD analysis of consecutive normal RR intervals. PMID:25532598
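
    The classification chain can be sketched as below, with Welch's PSD estimate standing in for the autoregressive spectrum used in the study and random placeholders for the RR data.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA

    def psd_features(rr_segments, fs=4.0):
        """rr_segments: (n_subjects, n_samples) evenly resampled RR series."""
        return np.array([welch(seg, fs=fs, nperseg=256)[1] for seg in rr_segments])

    def mahalanobis_classify(z, groups):
        """groups: list of (n_i, n_pcs) PCA-coefficient arrays, one per class."""
        dists = []
        for g in groups:
            mu, cov_inv = g.mean(axis=0), np.linalg.pinv(np.cov(g.T))
            d = z - mu
            dists.append(float(d @ cov_inv @ d))   # squared Mahalanobis distance
        return int(np.argmin(dists))

    psd = psd_features(np.random.randn(30, 1200))    # placeholder RR data
    coeffs = PCA(n_components=5).fit_transform(psd)  # five PC coefficients
    ```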

  16. A Conditional Entropy-Based Independent Component Analysis for Applications in Human Detection and Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Siana, Linda; Shou, Yu-Wen; Shen, Tzu-Kuei

    2010-12-01

    We present in this paper a modified independent component analysis (mICA) based on conditional entropy to discriminate unsorted independent components. We make use of the conditional entropy to select a subset of the ICA features with superior classification capability and apply a support vector machine (SVM) to recognizing human and nonhuman patterns. Moreover, we use background image models based on a Gaussian mixture model (GMM) to handle images with complicated backgrounds, and combine color-based shadow elimination with elliptical head models to improve the extraction and recognition of moving objects. Our proposed tracking mechanism monitors the movement of humans, animals, or vehicles within a surveillance area and keeps tracking moving pedestrians using color information in the HSV domain, applying a Kalman filter to predict the locations of moving objects when color information on the detected objects is lacking. Finally, our experimental results show that the proposed approach performs well in real-time applications in both indoor and outdoor environments.

  17. Forecasting of Air Quality Index in Delhi Using Neural Network Based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Kumar, Anikender; Goyal, P.

    2013-04-01

    Forecasting of the air quality index (AQI) is one of the topics of air quality research today, as it is useful for assessing the effects of air pollutants on human health in urban areas. Airborne pollution has been a serious problem in Delhi over the last decade and will remain a major one in the next few years. The air quality index is a number, based on the combined effect of the concentrations of major air pollutants, used by government agencies to characterize air quality at different locations; it is also used for local and regional air quality management in many metro cities of the world. Thus, the main objective of the present study is to forecast the daily AQI through a neural network based on principal component analysis (PCA). The AQI of the criteria air pollutants has been forecasted using the previous day's AQI and meteorological variables, which have been found to be nearly the same for weekends and weekdays. The principal components of the PCA-based neural network (PCA-neural network) have been computed using the correlation matrix of the input data. The PCA-neural network model has been evaluated by comparing its results with those of a plain neural network and with observed values during 2000-2006 in four different seasons through statistical parameters, which reveal that the PCA-neural network performs better than the neural network in all four seasons.
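
    A minimal sketch of a PCA-neural network forecaster with scikit-learn: standardization makes the PCA operate on the correlation matrix of the inputs, whose leading components feed a small neural network. Feature names, sizes, and data are illustrative.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    X = np.random.rand(1000, 8)    # [AQI(t-1), wind, temperature, humidity, ...]
    y = np.random.rand(1000)       # AQI(t) target, placeholder values

    model = make_pipeline(
        StandardScaler(),          # standardizing => PCA on the correlation matrix
        PCA(n_components=4),       # decorrelated inputs for the network
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X[:800], y[:800])
    print("test R^2:", model.score(X[800:], y[800:]))
    ```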

  18. Principal Component Analysis of breast DCE-MRI Adjusted with a Model Based Method

    PubMed Central

    Eyal, Erez; Badikhi, Daria; Furman-Haran, Edna; Kelcz, Fredrick; Kirshenbaum, Kevin J.; Degani, Hadassa

    2010-01-01

    Purpose To investigate a fast, objective, and standardized method for analyzing breast DCE-MRI, applying principal component analysis (PCA) adjusted with a model-based method. Materials and Methods 3D gradient-echo dynamic contrast-enhanced breast images of 31 malignant and 38 benign lesions, recorded on a 1.5 Tesla scanner, were retrospectively analyzed by PCA and by the model-based three-time-point (3TP) method. Results Intensity-scaled (IS) and enhancement-scaled (ES) datasets were reduced by PCA, yielding a 1st IS-eigenvector that captured the signal variation between fat and fibroglandular tissue; two further IS-eigenvectors and the first two ES-eigenvectors captured contrast-enhanced changes, whereas the remaining eigenvectors captured predominantly noise. Rotation of the two contrast-related eigenvectors led to high congruence between the projection coefficients and the 3TP parameters. The ES-eigenvectors and the rotation angle were highly reproducible across malignant lesions, enabling calculation of a general rotated eigenvector base. ROC curve analysis of the projection coefficients of the two eigenvectors indicated high sensitivity of the 1st rotated eigenvector for detecting lesions (AUC>0.97) and of the 2nd rotated eigenvector for differentiating malignancy from benignancy (AUC=0.87). Conclusion PCA adjusted with a model-based method provided a fast and objective computer-aided diagnostic tool for breast DCE-MRI. PMID:19856419

  19. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  20. The use of principal component and cluster analysis to differentiate banana peel flours based on their starch and dietary fibre components.

    PubMed

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-08-01

    Banana peel flours (BPF) prepared from green or ripe Cavendish and Dream banana fruits were assessed for their total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF), and insoluble dietary fibre (IDF). Principal component analysis (PCA) identified that a single component was responsible for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data yielded two statistically significant clusters (green and ripe bananas), indicating different behaviour according to the stage of ripeness based on the starch and dietary fibre components. We conclude that the starch and dietary fibre components can be used to discriminate between flours prepared from peels of fruits at different stages of ripeness. The results are also suggestive of the potential of green and ripe BPF as functional food ingredients. PMID:24575193
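
    The PCA-plus-clustering workflow reduces to a few lines; the data matrix below is a placeholder for the six starch and fibre measurements (TS, DS, RS, TDF, SDF, IDF) per flour sample.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X = np.random.rand(12, 6)          # rows: flour samples; cols: TS..IDF
    Z = StandardScaler().fit_transform(X)

    pca = PCA().fit(Z)
    print("variance explained:", pca.explained_variance_ratio_)
    scores = pca.transform(Z)[:, :1]   # keep the single dominant component
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    # With one dominant component, the two clusters should map onto green vs. ripe.
    ```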

  1. A component mode synthesis based hybrid method for the dynamic analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Roibás Millán, E.; Chimeno Manguán, M.; Simón Hidalgo, F.

    2015-11-01

    A hybrid method is presented for predicting the dynamic response of complex systems across a broad frequency range. In the mid-frequency range it is quite common to find a mixture of long-wavelength motion (global modes), which spans several sub-structures, together with weakly phase-correlated local motion (local modes), which is confined to individual sub-structures. In this work, the use of Component Mode Synthesis allows us to relate Finite Element Method sub-structuring to the location of modes within the different sub-structures defined in a Statistical Energy Analysis model. The method proposed here, the Hybrid Analysis based on Component Mode Synthesis sub-structuring (HA-CMS) method, provides greater flexibility in defining the applicability range of each of the calculation methods. A deterministic description of the global behaviour of the system is combined with a statistical description of the local one, taking into account the energy transfer between global and local scales. The application of the HA-CMS method is illustrated with a numerical validation example.

  2. Impact-acoustics-based health monitoring of tile-wall bonding integrity using principal component analysis

    NASA Astrophysics Data System (ADS)

    Tong, F.; Tso, S. K.; Hung, M. Y. Y.

    2006-06-01

    The use of acoustic features extracted from impact sounds for bonding integrity assessment has been extensively investigated. Nonetheless, considering the practical implementation of tile-wall non-destructive evaluation (NDE), the traditional defect classification method based directly on frequency-domain features has been of limited application because of the overlapping feature patterns corresponding to different classes whenever there is physical surface irregularity. The purpose of this paper is to explore the clustering and classification ability of principal component analysis (PCA) as applied to the impact-acoustics signature in tile-wall inspection, with a view to mitigating the adverse influence of surface non-uniformity. A clustering analysis with signatures acquired from sample slabs shows that impact-acoustics signatures of different bonding quality and different surface roughness are well separated into different clusters when using the first two principal components obtained. Using feature vectors extracted by PCA as inputs, a multilayer back-propagation artificial neural network (ANN) classifier is developed for automatic health monitoring and defect classification of tile-walls. The inspection results obtained experimentally on the prepared sample slabs are presented and discussed, confirming the utility of the proposed method, particularly in dealing with tile surface irregularity.
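
    The two-stage pipeline in this abstract (PCA on spectral features, then a back-propagation network on the projected scores) can be sketched as follows; the feature matrix and class labels are synthetic placeholders, and the layer sizes are illustrative only.

      # PCA feature extraction feeding a back-propagation ANN classifier.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 64))      # 200 signatures x 64 spectral features (synthetic)
      y = rng.integers(0, 3, size=200)    # hypothetical bonding-quality classes

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=2),    # first two PCs, as in the study
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
      clf.fit(X, y)
      print("training accuracy:", clf.score(X, y))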

  3. Discriminant Incoherent Component Analysis.

    PubMed

    Georgakis, Christos; Panagakis, Yannis; Pantic, Maja

    2016-05-01

    Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes, such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and AU activations. In this paper, the discriminant incoherent component analysis (DICA) is proposed in order to extract low-complexity components, corresponding to facial attributes, which are mutually incoherent among different classes (e.g., identity, expression, and AU activation) from training data, even in the presence of gross sparse errors. To this end, a suitable optimization problem, involving the minimization of the nuclear- and l1-norms, is solved. Having found an ensemble of class-specific incoherent components by the DICA, an unseen (test) image is expressed as a group-sparse linear combination of these components, where the non-zero coefficients reveal the class(es) of the respective facial attribute(s) that it belongs to. The performance of the DICA is experimentally assessed on both synthetic and real-world data. Emphasis is placed on face analysis tasks, namely, joint face and expression recognition, face recognition under varying percentages of training data corruption, subject-independent expression recognition, and AU detection, by conducting experiments on four data sets. The proposed method outperforms all comparator methods across all tasks and experimental settings. PMID:27008268

  4. PATHWAY-BASED ANALYSIS FOR GENOME-WIDE ASSOCIATION STUDIES USING SUPERVISED PRINCIPAL COMPONENTS

    PubMed Central

    Chen, Xi; Wang, Lily; Hu, Bo; Guo, Mingsheng; Barnard, John; Zhu, Xiaofeng

    2012-01-01

    Many complex diseases are influenced by genetic variations in multiple genes, each with only a small marginal effect on disease susceptibility. Pathway analysis, which identifies biological pathways associated with disease outcome, has become increasingly popular for genome-wide association studies (GWAS). In addition to combining weak signals from a number of SNPs in the same pathway, results from pathway analysis also shed light on the biological processes underlying disease. We propose a new pathway-based analysis method for GWAS, the supervised principal component analysis (SPCA) model. In the proposed SPCA model, a selected subset of SNPs most associated with disease outcome is used to estimate the latent variable for a pathway. The estimated latent variable for each pathway is an optimal linear combination of a selected subset of SNPs; therefore, the proposed SPCA model provides the ability to borrow strength across the SNPs in a pathway. In addition to identifying pathways associated with disease outcome, SPCA also carries out additional within-category selection to identify the most important SNPs within each gene set. The proposed model operates in a well-established statistical framework and can handle design information such as covariate adjustment and matching information in GWAS. We compare the proposed method with currently available methods using data with realistic linkage disequilibrium structures and we illustrate the SPCA method using the Wellcome Trust Case-Control Consortium Crohn Disease (CD) dataset. PMID:20842628
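
    The core computation is easy to sketch: screen the SNPs in a pathway by univariate association with the outcome, then take the first principal component of the selected subset as the pathway's latent variable. The data, screening statistic, and selection threshold below are all illustrative, not the paper's exact procedure.

      # Supervised-PCA sketch: association screening, then PCA on the survivors.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(2)
      G = rng.integers(0, 3, size=(500, 40)).astype(float)   # 500 subjects x 40 SNPs (0/1/2 coding)
      y = (G[:, :5].sum(axis=1) + rng.normal(0, 2, 500) > 5).astype(int)

      # Univariate screening: absolute correlation of each SNP with the outcome.
      scores = np.abs([np.corrcoef(G[:, j], y)[0, 1] for j in range(G.shape[1])])
      selected = scores >= np.quantile(scores, 0.75)         # keep the top quarter (illustrative)

      latent = PCA(n_components=1).fit_transform(G[:, selected]).ravel()
      print("latent-outcome correlation:", np.corrcoef(latent, y)[0, 1])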

  5. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    PubMed Central

    Li, Shanshan; Chen, Shaojie; Yue, Chen; Caffo, Brian

    2016-01-01

    Independent Component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions on different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks. PMID:26858592

  6. A multi-fault diagnosis method for sensor systems based on principal component analysis.

    PubMed

    Zhu, Daqi; Bai, Jie; Yang, Simon X

    2010-01-01

    A model based on PCA (principal component analysis) and a neural network is proposed for the multi-fault diagnosis of sensor systems. Firstly, predicted values of sensors are computed by using historical data measured under fault-free conditions and a PCA model. Secondly, the squared prediction error (SPE) of the sensor system is calculated. A fault can then be detected when the SPE suddenly increases. If more than one sensor in the system is out of order, the SPE is calculated after combining different sensors and reconstructing the signals of the combined sensors, in order to locate the faulty sensors. Finally, the feasibility and effectiveness of the proposed method are demonstrated by simulation and comparison studies, in which two sensors in the system are out of order at the same time. PMID:22315537
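
    The detection step described here is compact enough to sketch directly: fit a PCA model on fault-free data, compute the squared prediction error (the squared residual after projecting onto the retained components) for new samples, and flag jumps. The data and the quantile-based control limit are illustrative.

      # PCA-based SPE fault detection on synthetic sensor data.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      normal = rng.normal(size=(500, 6))            # fault-free training data (6 sensors)
      pca = PCA(n_components=3).fit(normal)

      def spe(X):
          residual = X - pca.inverse_transform(pca.transform(X))
          return (residual ** 2).sum(axis=1)

      limit = np.quantile(spe(normal), 0.99)        # empirical 99% control limit

      test = rng.normal(size=(100, 6))
      test[50:, 2] += 4.0                           # simulated fault on sensor 3
      print("flagged samples:", np.nonzero(spe(test) > limit)[0][:5])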

  7. An Image Reconstruction Algorithm for Electrical Capacitance Tomography Based on Robust Principle Component Analysis

    PubMed Central

    Lei, Jing; Liu, Shi; Wang, Xueyao; Liu, Qibin

    2013-01-01

    Electrical capacitance tomography (ECT) attempts to reconstruct the permittivity distribution of the cross-section of measurement objects from capacitance measurement data, in which reconstruction algorithms play a crucial role in real applications. Based on the robust principal component analysis (RPCA) method, a dynamic reconstruction model that utilizes multiple measurement vectors is presented in this paper, in which the evolution process of a dynamic object is considered as a sequence of images with different temporal sparse deviations from a common background. An objective functional that simultaneously considers the temporal constraint and the spatial constraint is proposed, in which the images are reconstructed in a batch pattern. An iteration scheme that integrates the advantages of the alternating direction iteration optimization (ADIO) method and the forward-backward splitting (FBS) technique is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm. PMID:23385418

  8. A remote sensing image fusion method based on feedback sparse component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jindong; Yu, Xianchuan; Pei, Wenjing; Hu, Dan; Zhang, Libao

    2015-12-01

    We propose a new remote sensing image (RSI) fusion technique based on sparse blind source separation theory. Our method employs feedback sparse component analysis (FSCA), which can extract the original image in a step-by-step manner and is robust against noise. For RSIs from the China-Brazil Earth Resources Satellite, FSCA can separate useful surface feature information from redundant information and noise. The FSCA algorithm is therefore used to develop two RSI fusion schemes: one focuses on fusing high-resolution and multi-spectral images, while the other fuses synthetic aperture radar bands. The experimental results show that the proposed method can preserve spectral and spatial details of the source images. For certain evaluation indexes, our method performs better than classical fusion methods.

  9. Sex-based differences in lifting technique under increasing load conditions: A principal component analysis.

    PubMed

    Sheppard, P S; Stevenson, J M; Graham, R B

    2016-05-01

    The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. PMID:26851478

  10. Contact- and distance-based principal component analysis of protein dynamics

    NASA Astrophysics Data System (ADS)

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-01

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
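
    A minimal version of the distance-based input described here: compute the pairwise Cα distances for every frame and feed that distance matrix to a PCA. The random-walk "trajectory" below merely stands in for an MD trajectory; as the abstract notes, the full pairwise set scales quadratically with system size, and a selected subset of distances can suffice.

      # Distance-based PCA of a (synthetic) trajectory.
      import numpy as np
      from itertools import combinations
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(4)
      n_frames, n_atoms = 300, 20
      traj = np.cumsum(rng.normal(size=(n_frames, n_atoms, 3)), axis=0) * 0.01

      pairs = list(combinations(range(n_atoms), 2))
      D = np.array([[np.linalg.norm(f[i] - f[j]) for i, j in pairs] for f in traj])

      pcs = PCA(n_components=2).fit_transform(D)    # projections onto the first two PCs
      print(pcs.shape)                              # (300, 2)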

  11. Contact- and distance-based principal component analysis of protein dynamics

    SciTech Connect

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.

  12. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  13. Structure borne noise analysis using Helmholtz equation least squares based forced vibro acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology that is focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far-field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation and hence cannot guarantee the best cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve a cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical co-ordinates using spherical wave expansion functions, utilize acoustic pressures measured in the nearfield of a vibrating object to reconstruct the vibro-acoustic responses on the source surface and acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of normal surface velocity of the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain orthogonal bases known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a similar manner to how modal decomposition identifies the critical natural modes in a structural vibration. Finally

  14. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    PubMed Central

    2010-01-01

    Background Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve hand dexterity implies selectively modulating many different EMG signals in order to make each joint move independently, and this could require significant cognitive effort from the user. Methods A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). This principal component set was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Results Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036
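
    The control mapping this abstract describes (a two-dimensional input driving a 16-DoF posture space) reduces to projecting through the top two principal components of recorded postures. A minimal sketch, with synthetic grasp data in place of the CyberHand sensor recordings:

      # Low-dimensional posture control via PCA.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      postures = rng.normal(size=(50, 16))              # 50 recorded grasps x 16 joint angles (synthetic)
      pca = PCA(n_components=2).fit(postures)

      def command_to_posture(c):
          # Map a 2-D control input (e.g. derived from EMG) to a 16-DoF joint posture.
          return pca.inverse_transform(np.atleast_2d(c))[0]

      print(command_to_posture([0.5, -0.2]).shape)      # (16,)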

  15. Robust principal component analysis-based four-dimensional computed tomography

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of low-rank matrix and sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction or/and change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames.

  16. Robust principal component analysis-based four-dimensional computed tomography

    PubMed Central

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2012-01-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of low-rank matrix and sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the ‘background’ or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the ‘motion’ or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction or/and change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames. PMID:21540490

  17. Robust principal component analysis-based four-dimensional computed tomography.

    PubMed

    Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai

    2011-06-01

    The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of low-rank matrix and sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction or/and change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames. PMID:21540490
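
    The low-rank plus sparse split at the heart of this model can be illustrated with a basic principal component pursuit iteration: singular-value thresholding for the low-rank "background" and soft thresholding for the sparse "motion". This fixed-parameter loop is a generic robust-PCA sketch, not the paper's split Bregman solver with tight frames.

      # Robust PCA sketch: decompose M into low-rank L plus sparse S.
      import numpy as np

      def svt(X, tau):
          # Singular value thresholding.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

      def shrink(X, tau):
          # Elementwise soft thresholding.
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

      def rpca(M, n_iter=200):
          m, n = M.shape
          lam = 1.0 / np.sqrt(max(m, n))
          mu = m * n / (4.0 * np.abs(M).sum())          # common heuristic step size
          L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
          for _ in range(n_iter):
              L = svt(M - S + Y / mu, 1.0 / mu)
              S = shrink(M - L + Y / mu, lam / mu)
              Y += mu * (M - L - S)
          return L, S

      rng = np.random.default_rng(6)
      background = np.outer(rng.normal(size=60), rng.normal(size=40))   # rank-1 "background"
      motion = np.zeros((60, 40)); motion[10:14, 5:8] = 5.0             # sparse "motion"
      L, S = rpca(background + motion)
      print(np.linalg.svd(L, compute_uv=False)[:3])     # one dominant singular value
      print(np.count_nonzero(np.abs(S) > 1e-3))         # a handful of sparse entries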

  18. Integrating functional genomics data using maximum likelihood based simultaneous component analysis

    PubMed Central

    van den Berg, Robert A; Van Mechelen, Iven; Wilderjans, Tom F; Van Deun, Katrijn; Kiers, Henk AL; Smilde, Age K

    2009-01-01

    Background In contemporary biology, complex biological processes are increasingly studied by collecting and analyzing measurements of the same entities collected with different analytical platforms. Such data comprise a number of data blocks that are coupled via a common mode. The goal of collecting this type of data is to discover biological mechanisms that underlie the behavior of the variables in the different data blocks. The simultaneous component analysis (SCA) family of data analysis methods is suited for this task. However, a SCA may be hampered by the data blocks being subjected to different amounts of measurement error, or noise. To unveil the true mechanisms underlying the data, it could be fruitful to take noise heterogeneity into consideration in the data analysis. Maximum likelihood based SCA (MxLSCA-P) was developed for this purpose. In a previous simulation study it outperformed normal SCA-P. This previous study, however, did not mimic typical functional genomics data sets in many respects, such as data blocks coupled via the experimental mode, more variables than experimental units, and medium to high correlations between variables. Here, we present a new simulation study in which the usefulness of MxLSCA-P compared to ordinary SCA-P is evaluated within a typical functional genomics setting. Subsequently, the performance of the two methods is evaluated by analysis of a real life Escherichia coli metabolomics data set. Results In the simulation study, MxLSCA-P outperforms SCA-P in terms of recovery of the true underlying scores of the common mode and of the true values underlying the data entries. MxLSCA-P further performed especially better when the simulated data blocks were subject to different noise levels. In the analysis of an E. coli metabolomics data set, MxLSCA-P provided a slightly better and more consistent interpretation. Conclusion MxLSCA-P is a promising addition to the SCA family. The analysis of coupled functional genomics

  19. Raman Based Process Monitor For Continuous Real-Time Analysis Of High Level Radioactive Waste Components

    SciTech Connect

    Bryan, Samuel A.; Levitskaia, Tatiana G.; Schlahta, Stephan N.

    2008-05-27

    A new monitoring system was developed at Pacific Northwest National Laboratory (PNNL) to quickly generate real-time data/analysis to facilitate a timely response to the dynamic characteristics of a radioactive high level waste stream. The developed process monitor features Raman and Coriolis/conductivity instrumentation configured for remote monitoring, MATLAB-based chemometric data processing, and comprehensive software for data acquisition/storage/archiving/display. The monitoring system is capable of simultaneously and continuously quantifying the levels of all the chemically significant anions within the waste stream including nitrate, nitrite, phosphate, carbonate, chromate, hydroxide, sulfate, and aluminate. The total sodium ion concentration was also determined independently by modeling inputs from on-line conductivity and density meters. In addition to the chemical information, this monitoring system provides immediate real-time data on the flow parameters, such as flow rate and temperature, and cumulative mass/volume of the retrieved waste stream. The components and analytical tools of the new process monitor can be tailored for a variety of complex mixtures in chemically harsh environments, such as pulp and paper processing liquids, electroplating solutions, and radioactive tank wastes. The developed monitoring system was tested for acceptability before it was deployed for use in Hanford Tank S-109 retrieval activities. The acceptance tests included performance inspection of hardware, software, and chemometric data analysis to determine the expected measurement accuracy for the different chemical species that are encountered during S-109 retrieval.

  20. Raman Based Process Monitor for Continuous Real-Time Analysis Of High Level Radioactive Waste Components

    SciTech Connect

    Bryan, S.; Levitskaia, T.; Schlahta, St.

    2008-07-01

    A new monitoring system was developed at Pacific Northwest National Laboratory (PNNL) to quickly generate real-time data/analysis to facilitate a timely response to the dynamic characteristics of a radioactive high level waste stream. The developed process monitor features Raman and Coriolis/conductivity instrumentation configured for remote monitoring, MATLAB-based chemometric data processing, and comprehensive software for data acquisition/storage/archiving/display. The monitoring system is capable of simultaneously and continuously quantifying the levels of all the chemically significant anions within the waste stream including nitrate, nitrite, phosphate, carbonate, chromate, hydroxide, sulfate, and aluminate. The total sodium ion concentration was also determined independently by modeling inputs from on-line conductivity and density meters. In addition to the chemical information, this monitoring system provides immediate real-time data on the flow parameters, such as flow rate and temperature, and cumulative mass/volume of the retrieved waste stream. The components and analytical tools of the new process monitor can be tailored for a variety of complex mixtures in chemically harsh environments, such as pulp and paper processing liquids, electroplating solutions, and radioactive tank wastes. The developed monitoring system was tested for acceptability before it was deployed for use in Hanford Tank S-109 retrieval activities. The acceptance tests included performance inspection of hardware, software, and chemometric data analysis to determine the expected measurement accuracy for the different chemical species that are encountered during S-109 retrieval.

  1. SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components

    SciTech Connect

    Wendt, R

    2014-06-15

    Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819, respectively, and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking.
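
    The background-removal step described here amounts to a PCA over the color channels of the optical-density image, followed by a weighted recombination of the resulting eigenimages. A toy sketch of that idea, with invented channel sensitivities and a synthetic streak pattern (the weights -0.574/0.819 and offset 0.012 in the abstract are calibration-specific and not reproduced):

      # Eigenimage decomposition of a 3-channel optical-density image.
      import numpy as np

      rng = np.random.default_rng(7)
      h, w = 64, 64
      dose = np.zeros((h, w)); dose[24:40, 24:40] = 1.0         # synthetic exposed region
      streak = np.tile(rng.normal(0, 0.2, (1, w)), (h, 1))      # column-wise thickness streaks
      od = np.stack([0.9 * dose + 0.3 * streak,                 # hypothetical per-channel
                     0.6 * dose + 0.3 * streak,                 # sensitivities: dose and
                     0.2 * dose + 0.3 * streak], axis=-1)       # streaks mix differently

      X = od.reshape(-1, 3)
      Xc = X - X.mean(axis=0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)         # PCA via SVD
      eigenimages = (Xc @ Vt.T).reshape(h, w, 3)

      # With these mixing ratios the first eigenimage is dominated by dose while the
      # streaks mostly load on a later one, so recombination can suppress the streaks.
      print(eigenimages[30, 30, 0], eigenimages[5, 5, 0])       # exposed vs background pixel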

  2. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and are modeled as a patch. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiment on the clinical images, the proposed AT-PCA method can suppress noise, enhance edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  3. Efficient blind dereverberation and echo cancellation based on independent component analysis for actual acoustic signals.

    PubMed

    Takeda, Ryu; Nakadai, Kazuhiro; Takahashi, Toru; Komatani, Kazunori; Ogata, Tetsuya; Okuno, Hiroshi G

    2012-01-01

    This letter presents a new algorithm for blind dereverberation and echo cancellation based on independent component analysis (ICA) for actual acoustic signals. We focus on frequency domain ICA (FD-ICA) because its computational cost and speed of learning convergence are sufficiently reasonable for practical applications such as hands-free speech recognition. In applying conventional FD-ICA as a preprocessing of automatic speech recognition in noisy environments, one of the most critical problems is how to cope with reverberations. To extract a clean signal from the reverberant observation, we model the separation process in the short-time Fourier transform domain and apply the multiple input/output inverse-filtering theorem (MINT) to the FD-ICA separation model. A naive implementation of this method is computationally expensive, because its time complexity is the second order of reverberation time. Therefore, the main issue in dereverberation is to reduce the high computational cost of ICA. In this letter, we reduce the computational complexity to the linear order of the reverberation time by using two techniques: (1) a separation model based on the independence of delayed observed signals with MINT and (2) spatial sphering for preprocessing. Experiments show that the computational cost grows in proportion to the linear order of the reverberation time and that our method improves the word correctness of automatic speech recognition by 10 to 20 points in an RT₂₀ = 670 ms reverberant environment. PMID:22023192

  4. Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Huebner, Claudia; Marchi, Gabriele

    2008-10-01

    In the field of blind image deconvolution, a promising new algorithm based on Principal Component Analysis (PCA) has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution - IBD - method); it is robust to white noise; only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case is completely unsupervised (i.e., multiple degraded observations of a scene are available). The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm version is proposed; the previously unexplored issue of objective assessment is then considered, and the achieved results are compared with those of the IBD method, which is used as a benchmark.

  5. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253

  6. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
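
    The attribute-filtering step is the heart of the change-detection logic: label connected components in the change mask, then keep only components whose geometry is building-like. A minimal sketch with an invented mask and an illustrative area threshold:

      # Connected component analysis of a building change mask.
      import numpy as np
      from scipy import ndimage

      change_mask = np.zeros((100, 100), dtype=bool)
      change_mask[10:40, 10:40] = True        # large change: candidate building part
      change_mask[70:72, 70:72] = True        # tiny change: likely noise

      labels, n = ndimage.label(change_mask)
      areas = ndimage.sum(change_mask, labels, index=range(1, n + 1))

      min_area = 25                           # illustrative minimum area in grid cells
      keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
      filtered = np.isin(labels, keep)        # mask of accepted building changes
      print(n, "components found,", len(keep), "kept")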

  7. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-sized spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-sized aberration terms, the precise phase aberration map is obtained and thus can be compensated by multiplying with its conjugation. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved and meanwhile the computation time can be significantly reduced. Experimental results demonstrate that our proposed technique could achieve both high compensating accuracy and robustness compared with other developed compensation methods. PMID:26977692

  8. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    SciTech Connect

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals thus enabling accurate monitoring over time.

  9. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    PubMed Central

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economic viable interpretations. PMID:25061614

  10. Monitoring of an industrial process by multivariate control charts based on principal component analysis.

    PubMed

    Marengo, Emilio; Gennaro, Maria Carla; Gianotti, Valentina; Robotti, Elisa

    2003-01-01

    The control and monitoring of an industrial process are performed in this paper by means of multivariate control charts. The process analysed consists of the bottling of the entire 1999 production of the sparkling wine "Asti Spumante". This process is characterised by a great number of variables that can be treated with multivariate techniques. Monitoring the process with classical Shewhart charts is risky because they do not take into account the functional relationships between the variables. The industrial process was first analysed by multivariate control charts based on Principal Component Analysis. This approach allowed the identification of problems in the process and of their causes. Subsequently, SMART charts (Simultaneous Scores Monitoring And Residual Tracking) were built in order to study the process as a whole. In spite of the successful identification of the presence of problems in the monitored process, the SMART chart did not allow an easy identification of the special causes of variation which caused the problems themselves. PMID:12911145
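
    A typical PCA-based multivariate chart of the kind used here monitors Hotelling's T² on the retained scores (SMART charts additionally track the residuals). A minimal sketch with synthetic in-control data and an empirical control limit; production charts would normally use an F-distribution-based limit instead.

      # Hotelling's T^2 control chart on PCA scores.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(8)
      in_control = rng.normal(size=(300, 8))            # historical in-control samples
      pca = PCA(n_components=3).fit(in_control)

      def t2(X):
          scores = pca.transform(X)
          return ((scores ** 2) / pca.explained_variance_).sum(axis=1)

      limit = np.quantile(t2(in_control), 0.99)         # empirical 99% limit

      new_batch = rng.normal(size=(50, 8))
      new_batch[25:] += 1.5                             # simulated process shift
      print("out-of-control points:", np.nonzero(t2(new_batch) > limit)[0])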

  11. Cistanches identification based on fluorescent spectral imaging technology combined with principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Dong, Jia; Huang, Furong; Li, Yuanpeng; Xiao, Chi; Xian, Ruiyi; Ma, Zhiguo

    2015-03-01

    In this study, fluorescence spectral imaging combined with principal component analysis (PCA) and artificial neural networks (ANNs) was used to identify Cistanche deserticola, Cistanche tubulosa and Cistanche sinensis, which are traditional Chinese medicinal herbs. The fluorescence spectral imaging system acquired spectral images of 40 cistanche samples; image denoising and binarization were applied to select the effective pixels, and spectral curves were extracted over the 450-680 nm wavelength range for the study. The data were preprocessed with a first-order derivative and then analyzed by principal component analysis and an artificial neural network. The results show that principal component analysis can broadly distinguish the cistanche species, and that further identification with the neural network makes the results more accurate, with correct classification rates of the training and testing sets as high as 100%. Identifying cistanches based on fluorescence spectral imaging combined with principal component analysis and an artificial neural network is therefore feasible.

  12. Towards Zero Retraining for Myoelectric Control Based on Common Model Component Analysis.

    PubMed

    Liu, Jianwei; Sheng, Xinjun; Zhang, Dingguo; Jiang, Ning; Zhu, Xiangyang

    2016-04-01

    In spite of several decades of intense research and development, the existing algorithms of myoelectric pattern recognition (MPR) are yet to satisfy the criteria that practical upper-extremity prostheses should fulfill. This study focuses on the criterion of short, or even zero, subject training. Due to the inherent nonstationarity in surface electromyography (sEMG) signals, current myoelectric control algorithms usually need to be retrained daily over multiple days of usage. This study was conducted based on the hypothesis that there exist some invariant characteristics in the sEMG signals when a subject performs the same motion on different days. Therefore, given a set of classifiers (models) trained on several days, it is possible to find common characteristics among them. To this end, we proposed to use the common model component analysis (CMCA) framework, in which an optimized projection was found to minimize the dissimilarity among multiple models of linear discriminant analysis (LDA) trained using data from different days. Five intact-limbed subjects and two transradial amputee subjects participated in an experiment including six sessions of sEMG data recording, performed on six different days, to simulate the application of MPR over multiple days. The results demonstrate that CMCA has significantly better generalization ability with unseen data (not included in the training data), leading to improved classification accuracy and an increased completion rate in a motion test simulation, compared with the baseline reference method. The results indicate that CMCA holds great potential in the effort of developing zero retraining of MPR. PMID:25879963

  13. Lippia origanoides chemotype differentiation based on essential oil GC-MS and principal component analysis.

    PubMed

    Stashenko, Elena E; Martínez, Jairo R; Ruíz, Carlos A; Arias, Ginna; Durán, Camilo; Salgar, William; Cala, Mónica

    2010-01-01

    Chromatographic (GC/flame ionization detection, GC/MS) and statistical analyses were applied to the study of essential oils and extracts obtained from flowers, leaves, and stems of Lippia origanoides plants, growing wild in different Colombian regions. Retention indices, mass spectra, and standard substances were used in the identification of 139 substances detected in these essential oils and extracts. Principal component analysis allowed L. origanoides classification into three chemotypes, characterized according to their essential oil major components. Alpha- and beta-phellandrenes, p-cymene, and limonene distinguished chemotype A; carvacrol and thymol were the distinctive major components of chemotypes B and C, respectively. Pinocembrin (5,7-dihydroxyflavanone) was found in L. origanoides chemotype A supercritical fluid (CO(2)) extract at a concentration of 0.83+/-0.03 mg/g of dry plant material, which makes this plant an interesting source of an important bioactive flavanone with diverse potential applications in cosmetic, food, and pharmaceutical products. PMID:19950347

  14. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  15. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
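
    For context, a compact version of standard kernel PCA (eigendecomposition of the double-centered Gram matrix) is sketched below; the paper's Gram-Schmidt approximation of PCA is not reproduced here.

      # Standard kernel PCA with an RBF kernel.
      import numpy as np

      def kernel_pca(X, n_components, gamma=1.0):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-gamma * sq)                       # RBF Gram matrix
          n = K.shape[0]
          one = np.full((n, n), 1.0 / n)
          Kc = K - one @ K - K @ one + one @ K @ one    # double centering
          vals, vecs = np.linalg.eigh(Kc)
          idx = np.argsort(vals)[::-1][:n_components]   # leading eigenpairs
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

      rng = np.random.default_rng(9)
      X = rng.normal(size=(100, 5))
      print(kernel_pca(X, 2).shape)                     # (100, 2)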

  16. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  17. Envelope extraction based dimension reduction for independent component analysis in fault diagnosis of rolling element bearing

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Na, Jing; Li, Bin; Fung, Rong-Fong

    2014-06-01

    A robust feature extraction scheme for rolling element bearing (REB) fault diagnosis is proposed by combining envelope extraction and independent component analysis (ICA). In the present approach, envelope extraction is not only utilized to obtain the impulsive component corresponding to faults from the REB, but also to reduce the dimension of vibration sources included in the sensor-picked signals. Consequently, the difficulty of applying the ICA algorithm when the number of sensors is limited and the number of sources is unknown can be successfully eliminated. Then, the ICA algorithm is employed to separate the envelopes according to the independence of vibration sources. Finally, the vibration features related to the REB faults can be separated from disturbances and clearly exposed by the envelope spectrum. Simulations and experimental tests are conducted to validate the proposed method.
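
    The pipeline is concrete enough to sketch end to end: take the Hilbert-transform envelope of each sensor channel, then run FastICA on the envelopes and inspect their spectra. The signals below are synthetic stand-ins for bearing vibration measurements, with an invented 37 Hz fault-impulse rate.

      # Envelope extraction followed by ICA on synthetic vibration signals.
      import numpy as np
      from scipy.signal import hilbert
      from sklearn.decomposition import FastICA

      fs = 10_000
      t = np.arange(0, 1.0, 1 / fs)
      rng = np.random.default_rng(10)
      impulses = (np.sin(2 * np.pi * 37 * t) > 0.99).astype(float)   # ~37 impacts per second
      fault = impulses * np.sin(2 * np.pi * 3_000 * t)               # impacts ring a 3 kHz resonance
      interference = np.sin(2 * np.pi * 50 * t)
      mixtures = np.c_[fault + 0.5 * interference,
                       0.4 * fault + interference] + 0.01 * rng.normal(size=(t.size, 2))

      envelopes = np.abs(hilbert(mixtures, axis=0))                  # envelope extraction
      sources = FastICA(n_components=2, random_state=0).fit_transform(envelopes)

      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for k in range(2):                                             # envelope spectra of the
          spec = np.abs(np.fft.rfft(sources[:, k] - sources[:, k].mean()))
          print("component", k, "peak at", freqs[spec.argmax()], "Hz")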

  18. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management,problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553

  19. Short prokaryotic DNA fragment binning using a hierarchical classifier based on linear discriminant analysis and principal component analysis.

    PubMed

    Zheng, Hao; Wu, Hongwei

    2010-12-01

    Metagenomics is an emerging field in which the power of genomic analysis is applied to an entire microbial community, bypassing the need to isolate and culture individual microbial species. Assembly of metagenomic DNA fragments is very much like the overlap-layout-consensus procedure for assembling isolated genomes, but is augmented by an additional binning step to differentiate scaffolds, contigs and unassembled reads into various taxonomic groups. In this paper, we employed n-mer oligonucleotide frequencies as the features and developed a hierarchical classifier (PCAHIER) for binning short (≤ 1,000 bps) metagenomic fragments. Principal component analysis was used to reduce the high dimensionality of the feature space. The hierarchical classifier consists of four layers of local classifiers that are implemented based on linear discriminant analysis. These local classifiers are responsible for binning prokaryotic DNA fragments into superkingdoms, fragments of the same superkingdom into phyla, fragments of the same phylum into genera, and fragments of the same genus into species, respectively. We evaluated the performance of the PCAHIER by using our own simulated data sets as well as the widely used simHC synthetic metagenome data set from the IMG/M system. The effectiveness of the PCAHIER was demonstrated through comparisons against a non-hierarchical classifier and two existing binning algorithms (TETRA and Phylopythia). PMID:21121023
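
    A minimal sketch of one layer of such a hierarchy, assuming random stand-in data in place of real k-mer frequency vectors: PCA compresses the 4-mer feature space and LDA then separates the two top-level classes. All names and sizes here are illustrative, not the PCAHIER implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.random((300, 256))        # stand-in 4-mer frequency vectors
        y = rng.integers(0, 2, 300)       # top-level labels, e.g. two superkingdoms

        # One layer of the hierarchy: PCA compresses the k-mer feature space,
        # then LDA separates the classes in the reduced space.
        layer = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
        layer.fit(X, y)
        print("training accuracy:", layer.score(X, y))
        # A full hierarchical classifier would stack such layers:
        # superkingdom -> phylum -> genus -> species, each trained on its own subset.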

  20. FPGA-based real-time blind source separation with principal component analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Matthew; Meyer-Baese, Uwe

    2015-05-01

    Principal component analysis (PCA) is a popular technique in reducing the dimension of a large data set so that more informed conclusions can be made about the relationship between the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Due to unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
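
    For reference, the Generalized Hebbian Algorithm can be prototyped in a few lines before being committed to hardware; the hedged sketch below implements Sanger's rule on synthetic mixed signals. The dimensions and learning rate are arbitrary choices for the example, not the FPGA design's parameters.

        import numpy as np

        def gha(X, n_components, lr=1e-3, epochs=10, seed=0):
            """Generalized Hebbian Algorithm: learns leading principal components
            with sample-by-sample updates, which maps naturally onto hardware."""
            rng = np.random.default_rng(seed)
            W = 0.01 * rng.standard_normal((n_components, X.shape[1]))
            for _ in range(epochs):
                for x in X:
                    y = W @ x
                    # Sanger's rule: Hebbian term minus a lower-triangular
                    # decorrelation term that orthogonalizes the outputs.
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        rng = np.random.default_rng(1)
        mixed = rng.standard_normal((2000, 4)) @ rng.standard_normal((4, 4))
        W = gha(mixed - mixed.mean(axis=0), n_components=2)
        components = (mixed - mixed.mean(axis=0)) @ W.T  # decorrelated outputs for BSS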

  1. Efficient uncertainty quantification in stochastic finite element analysis based on functional principal components

    NASA Astrophysics Data System (ADS)

    Bianchini, Ilaria; Argiento, Raffaele; Auricchio, Ferdinando; Lanzarone, Ettore

    2015-09-01

    The great influence of uncertainties on the behavior of physical systems has always drawn attention to the importance of a stochastic approach to engineering problems. Accordingly, in this paper, we address the problem of solving a Finite Element analysis in the presence of uncertain parameters. We consider an approach in which several solutions of the problem are obtained in correspondence with parameter samples, and propose a novel non-intrusive method, which exploits functional principal component analysis, to keep the computational effort acceptable. The proposed approach allows constructing an optimal basis of the solution space and projecting the full Finite Element problem onto a smaller space spanned by this basis. Solving the problem in this reduced space is computationally convenient, and very good approximations are nevertheless obtained, as shown by upper bounding the error between the full Finite Element solution and the reduced one. Finally, we assess the applicability of the proposed approach through different test cases, obtaining satisfactory results.
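
    The core of such a non-intrusive reduction can be sketched as follows, assuming a matrix of precomputed Finite Element solutions ("snapshots") at sampled parameter values; the snapshot data here is synthetic and the 99.9% energy threshold is an illustrative choice, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)
        n_dof, n_samples = 5000, 40
        # Stand-in for FE solutions computed at samples of the uncertain parameters.
        snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_samples))

        # Principal components of the solution ensemble give an optimal reduced basis.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
        basis = U[:, :r]

        # A new full-order solution u is approximated in the reduced space; the
        # truncated singular values bound the projection error.
        u = snapshots[:, 0]
        u_reduced = basis @ (basis.T @ u)
        print(r, np.linalg.norm(u - u_reduced))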

  2. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  3. Online Handwritten Signature Verification Using Neural Network Classifier Based on Principal Component Analysis

    PubMed Central

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach applies a feature selection technique to the information usually discarded in PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227
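
    A hedged sketch of such a pipeline on synthetic data: PCA is computed on stand-in signature features, a subset of components (including some normally discarded minor ones) is selected, and an MLP classifies genuine vs. forged samples. Feature dimensions, component indices, and network size are invented for illustration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((400, 60))           # stand-in dynamic signature features
        y = rng.integers(0, 2, 400)         # 1 = genuine, 0 = skilled forgery

        pca = PCA().fit(X)
        scores = pca.transform(X)
        # Keep some leading components plus a few usually discarded minor ones,
        # which the paper reports can carry useful discriminative information.
        X_sub = scores[:, list(range(10)) + list(range(50, 55))]

        mlp = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0)
        mlp.fit(X_sub, y)
        pred = mlp.predict(X_sub)
        far = np.mean(pred[y == 0] == 1)    # forgeries accepted
        frr = np.mean(pred[y == 1] == 0)    # genuine signatures rejected
        print(f"FAR={far:.3f}, FRR={frr:.3f}")  # a real evaluation uses held-out data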

  4. Online handwritten signature verification using neural network classifier based on principal component analysis.

    PubMed

    Iranmanesh, Vahab; Ahmad, Sharifah Mumtazah Syed; Adnan, Wan Azizun Wan; Yussof, Salman; Arigbabu, Olasimbo Ayodeji; Malallah, Fahad Layth

    2014-01-01

    One of the main difficulties in designing an online signature verification (OSV) system is to find the most distinctive features with high discriminating capabilities for the verification, particularly with regard to the high variability which is inherent in genuine handwritten signatures, coupled with the possibility of skilled forgeries having close resemblance to the original counterparts. In this paper, we propose a systematic approach to online signature verification through the use of a multilayer perceptron (MLP) on a subset of principal component analysis (PCA) features. The proposed approach applies a feature selection technique to the information usually discarded in PCA computation, which can be significant in attaining reduced error rates. The experiment is performed using 4000 signature samples from the SIGMA database, which yielded a false acceptance rate (FAR) of 7.4% and a false rejection rate (FRR) of 6.4%. PMID:25133227

  5. Independent component analysis based channel equalization for 6 × 6 MIMO-OFDM transmission over few-mode fiber.

    PubMed

    He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua

    2016-05-01

    We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the proposed two ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) can achieve comparable performance while requiring far fewer training symbols. Consequently, the overheads for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. Meanwhile, we also experimentally investigate the convergence speed of the proposed ICA-based CEs. PMID:27137537

  6. Large sample inference for a win ratio analysis of a composite outcome based on prioritized components.

    PubMed

    Bebu, Ionut; Lachin, John M

    2016-01-01

    Composite outcomes are common in clinical trials, especially for multiple time-to-event outcomes (endpoints). The standard approach that uses the time to the first outcome event has important limitations. Several alternative approaches have been proposed to compare treatment versus control, including the proportion in favor of treatment and the win ratio. Herein, we construct tests of significance and confidence intervals in the context of composite outcomes based on prioritized components using the large sample distribution of certain multivariate multi-sample U-statistics. This non-parametric approach provides a general inference for both the proportion in favor of treatment and the win ratio, and can be extended to stratified analyses and the comparison of more than two groups. The proposed methods are illustrated with time-to-event outcomes data from a clinical trial. PMID:26353896
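
    The win ratio itself reduces to counting prioritized pairwise comparisons; below is a minimal sketch for two groups with two prioritized time-to-event outcomes per subject (larger time = better), assuming complete follow-up. The toy data are invented, and the paper's large-sample U-statistic inference is not reproduced here.

        def win_ratio(treat, control):
            """Each subject is a tuple of prioritized outcome times, highest
            priority first. Ties on one outcome fall through to the next;
            complete ties count for neither group."""
            wins = losses = 0
            for t in treat:
                for c in control:
                    for k in range(len(t)):        # walk through priorities
                        if t[k] > c[k]:
                            wins += 1
                            break
                        if t[k] < c[k]:
                            losses += 1
                            break
            return wins / losses                    # assumes at least one loss

        treat = [(9.0, 5.0), (8.0, 7.0), (10.0, 2.0)]   # (time to death, time to hosp.)
        control = [(7.0, 6.0), (9.0, 1.0)]
        print(win_ratio(treat, control))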

  7. The analysis of normative requirements to materials of VVER components, basing on LBB concepts

    SciTech Connect

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-04-01

    The paper demonstrates the insufficiency of some requirements of the native Norms when compared with foreign requirements for the consideration of calculated situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) a comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) a comparison of RF and USA norms for pressure vessel material acceptance, together with data from pressure vessel hydrotests; (3) a comparison of RF and USA norms on the presence of defects in NPP vessels, the development of defect schematization rules, and the foundation of a calculated defect (semi-axis ratio a/b) for pressure vessel and piping components; (4) the sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) an analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) the necessity to correct estimation methods for the ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculated situations of (a) LBB and (b) short cracks; (7) the necessity to correct estimation methods of ultimate states with consideration of the static and cyclic loading (warm prestressing effect) of the pressure vessel, and an estimation of the stability of this effect; (8) proposals for corrections to the PNAE G-7-002-86 Norms.

  8. Study of T-wave morphology parameters based on Principal Components Analysis during acute myocardial ischemia

    NASA Astrophysics Data System (ADS)

    Baglivo, Fabricio Hugo; Arini, Pedro David

    2011-12-01

    Electrocardiographic repolarization abnormalities can be detected by Principal Components Analysis of the T-wave. In this work we studied the effect of signal averaging on the mean value and reproducibility of the ratio of the 2nd to the 1st eigenvalue of the T-wave (T21W) and the relative and absolute T-wave residuum (TrelWR and TabsWR) in the ECG during ischemia induced by Percutaneous Coronary Intervention. Also, the intra-subject and inter-subject variability of the T-wave parameters have been analyzed. Results showed that TrelWR and TabsWR evaluated from the average of 10 complexes had lower values and higher reproducibility than those obtained from 1 complex. On the other hand, T21W calculated from 10 complexes did not show statistical differences versus the T21W calculated on single beats. The results of this study corroborate that, with a signal averaging technique, the 2nd and the 1st eigenvalues are not affected by noise while the 4th to 8th eigenvalues are strongly affected by it, suggesting the use of the signal averaging technique before calculation of the absolute and relative T-wave residuum. Finally, we have shown that T-wave morphology parameters present high intra-subject stability.
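
    For orientation, these parameters derive from the eigenvalues of the inter-lead T-wave covariance; the sketch below computes T21W and a relative residuum on synthetic signal-averaged leads. Lead counts, noise levels, and variable names are illustrative assumptions, not the study's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n_leads, n_samples = 8, 120
        base = np.sin(np.linspace(0, np.pi, n_samples))      # T-wave template
        leads = np.outer(rng.uniform(0.5, 1.5, n_leads), base)

        # Signal averaging over 10 beats attenuates measurement noise before PCA.
        beats = [leads + 0.05 * rng.standard_normal(leads.shape) for _ in range(10)]
        averaged = np.mean(beats, axis=0)

        # PCA of the inter-lead covariance: eigenvalues describe T-wave morphology.
        w = np.linalg.eigvalsh(np.cov(averaged))[::-1]       # descending eigenvalues
        T21W = w[1] / w[0]                                   # 2nd-to-1st eigenvalue ratio
        TrelWR = np.sum(w[3:]) / np.sum(w)                   # relative non-dipolar residuum
        print(T21W, TrelWR)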

  9. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix

    PubMed Central

    Gao, H; Zhang, T; Wu, Y; Wu, Y; Jiang, L; Zhan, J; Li, J; Yang, R

    2014-01-01

    Given the drawbacks of implementing multivariate analysis for mapping multiple traits in genome-wide association studies (GWAS), principal component analysis (PCA) has been widely used to generate independent ‘super traits' from the original multivariate phenotypic traits for univariate analysis. However, parameter estimates in this framework may not be the same as those from the joint analysis of all traits, leading to spurious linkage results. In this paper, we propose to perform PCA on the residual covariance matrix instead of the phenotypic covariance matrix, based on which multiple traits are transformed to a group of pseudo principal components. The PCA for the residual covariance matrix allows analyzing each pseudo principal component separately. In addition, all parameter estimates are equivalent to those obtained from the joint multivariate analysis under a linear transformation. Moreover, a fast least absolute shrinkage and selection operator (LASSO) for estimating the sparse oversaturated genetic model greatly reduces the computational costs of this procedure. Extensive simulations show the statistical and computational efficiency of the proposed method. We illustrate this method in a GWAS for 20 slaughtering traits and meat quality traits in beef cattle. PMID:24984606

  10. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Image (fMRI) data. Each participant's data consisted of a resting state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or both the phenotypic and imaging data. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2%, produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two class diagnosis and 66.8% on three class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two class diagnosis and 68.6% on three class diagnosis, better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012). PMID:23162439
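
    A toy version of the final classification stage, assuming random stand-in features instead of real FFT/fMRI data: kernel PCA reduces the feature space and an SVM performs the two-class diagnosis. Dimensions and kernel parameters are invented for the example.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Stand-in per-participant features (e.g., FFT coefficients over voxels,
        # possibly concatenated with phenotypic columns).
        X = rng.standard_normal((120, 500))
        y = rng.integers(0, 2, 120)        # 1 = ADHD, 0 = healthy control

        clf = make_pipeline(KernelPCA(n_components=20, kernel="rbf", gamma=1e-3), SVC())
        print(cross_val_score(clf, X, y, cv=5).mean())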

  11. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    NASA Astrophysics Data System (ADS)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli vs. the neutral. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records of each emotional picture were concatenated with the neutral, and a three-step analysis was applied to each of them in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, a clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test on the Global Field Power (GFP) of the IC projection scalp maps identified significantly different temporal segments for each emotional condition vs. the neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition and the neutral. Compared to the neutral, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and the anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were consistent with other functional imaging

  12. Analysis of the mineral acid-base components of acid-neutralizing capacity in Adirondack Lakes

    NASA Astrophysics Data System (ADS)

    Munson, R. K.; Gherini, S. A.

    1993-04-01

    Mineral acids and bases influence pH largely through their effects on acid-neutralizing capacity (ANC). This influence becomes particularly significant as ANC approaches zero. Analysis of data collected by the Adirondack Lakes Survey Corporation (ALSC) from 1469 lakes throughout the Adirondack region indicates that variations in ANC in these lakes correlate well with base cation concentrations (CB), but not with the sum of mineral acid anion concentrations (CA). This is because CA is relatively constant across the Adirondacks, whereas CB varies widely. Processes that supply base cations to solution are ion-specific. Sodium and silica concentrations are well correlated, indicating a common source, mineral weathering. Calcium and magnesium also covary but do not correlate well with silica. This indicates that ion exchange is a significant source of these cations in the absence of carbonate minerals. Iron and manganese concentrations are elevated in the lower waters of some lakes due to reducing conditions. This leads to an ephemeral increase in CB and ANC. When the lakes mix and oxic conditions are restored, these ions largely precipitate from solution. Sulfate is the dominant mineral acid anion in ALSC lakes. Sulfate concentrations are lowest in seepage lakes, commonly about 40 μeq/L less than in drainage lakes. This is due in part to the longer hydraulic detention time in seepage lakes, which allows slow sulfate reduction reactions more time to decrease lake sulfate concentration. Nitrate typically influences ANC during events such as snowmelt. Chloride concentrations are generally low, except in lakes impacted by road salt.

  13. Design and Analysis of a Novel Six-Component F/T Sensor based on CPM for Passive Compliant Assembly

    NASA Astrophysics Data System (ADS)

    Liang, Qiaokang; Zhang, Dan; Wang, Yaonan; Ge, Yunjian

    2013-10-01

    This paper presents the design and analysis of a six-component Force/Torque (F/T) sensor whose design is based on the mechanism of the Compliant Parallel Mechanism (CPM). The force sensor is used to measure forces along the x-, y-, and z-axis (Fx, Fy and Fz) and moments about the x-, y-, and z-axis (Mx, My and Mz) simultaneously and to provide passive compliance during parts handling and assembly. Particularly, the structural design, the details of the measuring principle and the kinematics are presented. Afterwards, based on the Design of Experiments (DOE) approach provided by the software ANSYS®, a Finite Element Analysis (FEA) is performed. This analysis is performed with the objective of achieving both high sensitivity and isotropy of the sensor. The results of FEA show that the proposed sensor possesses high performance and robustness.

  14. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, H.D.; Ayers, C.W.; Casada, D.A.

    1996-11-26

    A new instrument is described for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician. 4 figs.

  15. Instrument for analysis of electric motors based on slip-poles component

    DOEpatents

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.

  16. A Class-Information-Based Sparse Component Analysis Method to Identify Differentially Expressed Genes on RNA-Seq Data.

    PubMed

    Liu, Jin-Xing; Xu, Yong; Gao, Ying-Lian; Zheng, Chun-Hou; Wang, Dong; Zhu, Qi

    2016-01-01

    With the development of deep sequencing technologies, many RNA-Seq data have been generated. Researchers have proposed many methods based on sparse theory to identify the differentially expressed genes from these data. In order to improve the performance of sparse principal component analysis, in this paper, we propose a novel class-information-based sparse component analysis (CISCA) method which introduces the class information via a total scatter matrix. First, CISCA normalizes the RNA-Seq data by using a Poisson model to obtain their differential sections. Second, the total scatter matrix is obtained by combining the between-class and within-class scatter matrices. Third, we decompose the total scatter matrix by using singular value decomposition and construct a new data matrix by using the singular values and left singular vectors. Then, aiming at obtaining sparse components, CISCA decomposes the constructed data matrix by solving an optimization problem with sparse constraints on the loading vectors. Finally, the differentially expressed genes are identified by using the sparse loading vectors. The results on simulated and real RNA-Seq data demonstrate that our method is effective and suitable for analyzing these data. PMID:27045835

  17. Design and Validation of a Morphing Myoelectric Hand Posture Controller Based on Principal Component Analysis of Human Grasping

    PubMed Central

    Segil, Jacob L.; Weir, Richard F. ff.

    2015-01-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between postures like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures, including functional grasps, using between two and four surface electromyography (EMG) electrode pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with the highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest performing map controlled over 13 out of 15 joints accurately. PMID:23649286

  18. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way for wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387

  19. Development of a new signal processing algorithm based on independent component analysis for single channel ECG data.

    PubMed

    Lee, J; Lee, K J; Yoo, S K

    2004-01-01

    In this paper, we propose a new signal processing algorithm based on independent component analysis (ICA) for single-channel ECG data. To apply ICA to single-channel data, mixed (multi-channel) signals are constructed by adding delays to the original data. ICA then yields an enhanced signal. To validate the usefulness of this signal, QRS complex detection was also performed. In the QRS detection process, the Hilbert transform and the wavelet transform were used, and good QRS detection efficacy was obtained. Furthermore, a signal that could not be filtered properly using existing algorithms also showed better enhancement. In the future, we need to study algorithm optimization and simplification. PMID:17271650
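
    The delay-embedding trick can be sketched briefly: delayed copies of the single channel form a pseudo multi-channel matrix on which ICA operates. The synthetic ECG, delay choices, and component-selection heuristic below are illustrative assumptions, not the authors' settings.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        fs = 360
        t = np.arange(0, 10, 1 / fs)
        # Stand-in single-channel ECG: periodic spikes plus baseline noise.
        ecg = (np.mod(t, 0.8) < 0.02).astype(float) + 0.2 * rng.standard_normal(t.size)

        # Build a pseudo multi-channel record from delayed copies of the one channel.
        delays = [0, 2, 4, 6]
        n = ecg.size - max(delays)
        X = np.column_stack([ecg[d:d + n] for d in delays])

        # ICA on the delay-embedded matrix yields an enhanced QRS-dominant component.
        S = FastICA(n_components=len(delays), random_state=0).fit_transform(X)
        best = np.argmax([np.max(np.abs(s)) / np.std(s) for s in S.T])  # most impulsive IC
        enhanced = S[:, best]   # pass this to Hilbert/wavelet QRS detection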

  20. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    SciTech Connect

    Yu, P.

    2008-01-01

    Recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to discriminate molecular spectra by utilizing the entire spectral information rather than a single intensity or frequency point. Gaussian and Lorentzian modeling techniques are able to quantify the spectral component peaks of molecular structure, functional groups and biopolymers. By applying these multivariate techniques together with Gaussian and Lorentzian modeling, inherent molecular structures, functional groups and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
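
    As a small illustration of the multi-component peak modeling step, the sketch below fits two Gaussian profiles to a synthetic spectrum with scipy's curve_fit; a Lorentzian profile would be substituted in the same way. Band positions, widths, and noise are invented for the example.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, a, x0, w):
            return a * np.exp(-((x - x0) ** 2) / (2 * w ** 2))

        def lorentzian(x, a, x0, w):
            # Alternative profile; swap it into two_peaks for Lorentzian modeling.
            return a * w ** 2 / ((x - x0) ** 2 + w ** 2)

        def two_peaks(x, a1, x01, w1, a2, x02, w2):
            # Two overlapping component bands modeled as Gaussians.
            return gaussian(x, a1, x01, w1) + gaussian(x, a2, x02, w2)

        x = np.linspace(1500, 1700, 400)          # wavenumber axis (illustrative)
        rng = np.random.default_rng(0)
        yobs = two_peaks(x, 1.0, 1550, 12, 0.7, 1650, 15) \
               + 0.01 * rng.standard_normal(x.size)

        popt, _ = curve_fit(two_peaks, x, yobs, p0=[1, 1540, 10, 1, 1660, 10])
        areas = popt[0] * popt[2], popt[3] * popt[5]   # proportional to peak areas
        print(popt, areas)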

  1. A review of component analysis based on magnetization curves: state-of-the art and future developments.

    NASA Astrophysics Data System (ADS)

    Egli, R.

    2005-05-01

    Rocks and sediments inevitably contain mixtures of magnetic minerals, grain sizes, and weathering states. Most rock magnetic interpretation techniques rely on a set of parameters, such as susceptibility and isothermal/anhysteretic remanent magnetization (ARM or IRM). These parameters are usually interpreted in terms of the mineralogy and domain state of the magnetic particles. In some cases, such interpretation of natural samples can be misleading or inconclusive. A less constrained approach to magnetic mineralogy models is based on the analysis of magnetization curves, which are decomposed into a set of elementary contributions. Each contribution is called a magnetic component, and characterizes a specific set of magnetic grains with a unimodal distribution of physical and chemical properties. Magnetic components are related to specific biogeochemical signatures rather than representing traditional categories, such as SD magnetite. This unconventional approach gives a direct link to the interpretation of natural processes on a multidisciplinary level. Despite the aforementioned advantages, component analysis has not yet come into wide use for three reasons: 1) the lack of quantitative magnetic models for natural, non-ideal magnetic grains and/or the statistical distribution of their properties, 2) the intrinsic mathematical complexity of unmixing problems, and 3) the need for accurate measurements that are beyond the usual standards. Since magnetic components rarely occur alone in natural samples, unmixing techniques and rock magnetic models are interdependent. A big effort has recently been undertaken to verify the basic properties of magnetization curves and obtain useful and reliable solutions of the unmixing problem. The result of this experience is a collection of a few hundred magnetic components identified in various natural environments. The properties of these components are controlled by their biogeochemical history, regardless of the provenance of the

  2. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    PubMed Central

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-01-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms. PMID:26262622

  3. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-01-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms. PMID:26262622
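
    The compression step with an error-bound guarantee can be sketched as a truncated PCA whose rank is chosen from the singular-value tail; the cluster data below is synthetic and the 5% bound is an illustrative assumption, not the paper's parameter.

        import numpy as np

        rng = np.random.default_rng(0)
        # Stand-in readings for one cluster: 200 epochs x 12 correlated sensors.
        base = rng.standard_normal((200, 3))
        X = base @ rng.standard_normal((3, 12)) + 0.01 * rng.standard_normal((200, 12))

        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

        # Keep the fewest components whose reconstruction error meets the bound:
        # tail[k] is the residual norm if only the first k components are kept.
        err_bound = 0.05 * np.linalg.norm(X - mean)
        tail = np.sqrt(np.cumsum((s**2)[::-1])[::-1])
        k = int(np.argmax(tail <= err_bound))
        scores = (X - mean) @ Vt[:k].T        # transmitted instead of the raw data
        X_rec = scores @ Vt[:k] + mean
        print(k, np.linalg.norm(X - X_rec))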

  4. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. An MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting the domain motions by rigid-body domain analysis, tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by tICA represented not the closure motion described by the largest-amplitude mode determined by principal component analysis, but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing the slow dynamics of proteins.
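
    A compact sketch of tICA itself, assuming a synthetic trajectory with one slow hidden coordinate: the slow modes solve the generalized eigenproblem formed by the time-lagged and instantaneous covariance matrices. The lag time and dimensions are arbitrary example choices, not the study's.

        import numpy as np

        def tica_modes(X, lag):
            """Solve C(tau) v = lambda C(0) v for the time-lagged covariance
            C(tau) and the instantaneous covariance C(0)."""
            X = X - X.mean(axis=0)
            C0 = X.T @ X / len(X)
            Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)
            Ct = (Ct + Ct.T) / 2                   # symmetrize the lagged covariance
            w, V = np.linalg.eigh(C0)
            W = V / np.sqrt(w)                     # whitening transform for C(0)
            lam, U = np.linalg.eigh(W.T @ Ct @ W)  # diagonalize whitened C(tau)
            order = np.argsort(lam)[::-1]          # slowest modes first
            return lam[order], (W @ U)[:, order]

        rng = np.random.default_rng(0)
        slow = np.zeros(5000)
        for i in range(1, 5000):                   # slow AR(1) hidden coordinate
            slow[i] = 0.99 * slow[i - 1] + rng.standard_normal()
        X = np.outer(slow, rng.standard_normal(6)) + rng.standard_normal((5000, 6))
        lam, modes = tica_modes(X, lag=50)
        print(lam[:3])   # lag-autocorrelations of the slowest collective modes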

  5. ECG-based gating in ultra high field cardiovascular magnetic resonance using an independent component analysis approach

    PubMed Central

    2013-01-01

    Background In Cardiovascular Magnetic Resonance (CMR), the synchronization of image acquisition with heart motion is performed in clinical practice by processing the electrocardiogram (ECG). The ECG-based synchronization is well established for MR scanners with magnetic fields up to 3 T. However, this technique is prone to errors in ultra high field environments, e.g. in 7 T MR scanners as used in research applications. The high magnetic fields cause severe magnetohydrodynamic (MHD) effects which disturb the ECG signal. Image synchronization is thus less reliable and yields artefacts in CMR images. Methods A strategy based on Independent Component Analysis (ICA) was pursued in this work to enhance the ECG contribution and attenuate the MHD effect. ICA was applied to 12-lead ECG signals recorded inside a 7 T MR scanner. An automatic source identification procedure was proposed to identify an independent component (IC) dominated by the ECG signal. The identified IC was then used for detecting the R-peaks. The presented ICA-based method was compared to other R-peak detection methods using 1) the raw ECG signal, 2) the raw vectorcardiogram (VCG), 3) the state-of-the-art gating technique based on the VCG, 4) an updated version of the VCG-based approach and 5) the ICA of the VCG. Results ECG signals from eight volunteers were recorded inside the MR scanner. Recordings with an overall length of 87 min accounting for 5457 QRS complexes were available for the analysis. The records were divided into a training and a test dataset. In terms of R-peak detection within the test dataset, the proposed ICA-based algorithm achieved a detection performance with an average sensitivity (Se) of 99.2%, a positive predictive value (+P) of 99.1%, with an average trigger delay and jitter of 5.8 ms and 5.0 ms, respectively. Long term stability of the demixing matrix was shown based on two measurements of the same subject, each being separated by one year, whereas an averaged detection

  6. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and

  7. Interim Progress Report on the Application of an Independent Components Analysis-based Spectral Unmixing Algorithm to Beowulf Computers

    USGS Publications Warehouse

    Lemeshewsky, George

    2003-01-01

    This report describes work done to implement an independent-components-analysis (ICA) -based blind unmixing algorithm on the Eastern Region Geography (ERG) Beowulf computer cluster. It gives a brief description of blind spectral unmixing using ICA-based techniques and a preliminary example of unmixing results for Landsat-7 Thematic Mapper multispectral imagery using a recently reported unmixing algorithm. Also included are computer performance data. The final phase of this work, the actual implementation of the unmixing algorithm on the Beowulf cluster, was not completed this fiscal year and is addressed elsewhere. It is noted that study of this algorithm and its application to land-cover mapping will continue under another research project in the Land Remote Sensing theme into fiscal year 2004.

  8. An Assessment of Hermite Function Based Approximations of Mutual Information Applied to Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Sorensen, Julian

    2008-12-01

    At the heart of many ICA techniques is a nonparametric estimate of an information measure, usually via nonparametric density estimation, for example, kernel density estimation. While not as popular as kernel density estimators, orthogonal functions can be used for nonparametric density estimation (via a truncated series expansion whose coefficients are calculated from the observed data). While such estimators do not necessarily yield a valid density, as kernel density estimators do, they are faster to calculate, in particular for a modified version of Renyi's entropy of order 2. In this paper, we compare the performance of ICA using Hermite series based estimates of Shannon's and Renyi's mutual information to that of Gaussian kernel based estimates. The comparisons also include ICA using the RADICAL estimate of Shannon's entropy and a FastICA estimate of neg-entropy.

  9. Bearing fault recognition method based on neighbourhood component analysis and coupled hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhou, Haitao; Chen, Jin; Dong, Guangming; Wang, Hongchao; Yuan, Haodong

    2016-01-01

    Due to the important role rolling element bearings play in rotating machines, condition monitoring and fault diagnosis systems should be established to avoid abrupt breakage during operation. Various features from the time, frequency and time-frequency domains are usually used for bearing or machinery condition monitoring. In this study, an NCA-based feature extraction (FE) approach is proposed to reduce the dimensionality of the original feature set and avoid the "curse of dimensionality". Furthermore, a coupled hidden Markov model (CHMM) based on multichannel data acquisition is applied to diagnose bearing or machinery faults. Two case studies are presented to validate the proposed approach in both bearing fault diagnosis and fault severity classification. The experimental results show that the proposed NCA-CHMM can remove redundant information, fuse data from different channels and improve the diagnosis results.
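
    A hedged sketch of the feature-reduction stage on stand-in data: scikit-learn's NeighborhoodComponentsAnalysis learns a low-dimensional projection that preserves class neighborhoods. A k-NN classifier stands in for the paper's CHMM purely to keep the example self-contained; all sizes are illustrative.

        import numpy as np
        from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Stand-in feature set: time/frequency statistics for bearing records.
        X = rng.standard_normal((300, 24))
        y = rng.integers(0, 4, 300)   # e.g., normal / inner race / outer race / ball fault

        clf = make_pipeline(
            NeighborhoodComponentsAnalysis(n_components=5, random_state=0),
            KNeighborsClassifier(n_neighbors=5),
        )
        print(cross_val_score(clf, X, y, cv=5).mean())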

  10. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, the LiDAR data is divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with the process of Connected Component Analysis (CCA), where the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and a neighbouring black pixel, if any; no edge exists from a black pixel to a neighbouring white pixel. This produces a disconnected-components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges. The proposed technique is evaluated using two Australian data sets.
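
    The Connected Component Analysis phase reduces to labeling contiguous regions of the primary building mask; the sketch below does this with scipy on a synthetic mask and rejects small components, a crude stand-in for the NDVI/entropy-based discrimination described above. Sizes and thresholds are illustrative.

        import numpy as np
        from scipy import ndimage

        # Stand-in primary building mask: True pixels mark elevated LiDAR returns.
        mask = np.zeros((60, 60), dtype=bool)
        mask[5:20, 5:25] = True      # building 1
        mask[30:50, 35:55] = True    # building 2
        mask[40:43, 5:8] = True      # small clump, e.g. a tree

        # Connected component analysis labels each contiguous object in the mask.
        labels, n_objects = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))

        # Small components are rejected here by area alone; the paper instead
        # uses NDVI, entropy, and LiDAR cues before boundary refinement.
        building_labels = [i + 1 for i, s in enumerate(sizes) if s >= 50]
        print(n_objects, building_labels)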

  11. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  12. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    SciTech Connect

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-07

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  13. Slow dynamics of a protein backbone in molecular dynamics simulation revealed by time-structure based independent component analysis

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2013-12-01

    We recently proposed the method of time-structure based independent component analysis (tICA) to examine the slow dynamics involved in conformational fluctuations of a protein as estimated by molecular dynamics (MD) simulation [Y. Naritomi and S. Fuchigami, J. Chem. Phys. 134, 065101 (2011)]. Our previous study focused on domain motions of the protein and examined its dynamics by using rigid-body domain analysis and tICA. However, the protein changes its conformation not only through domain motions but also by various types of motions involving its backbone and side chains. Some of these motions might occur on a slow time scale: we hypothesize that if so, we could effectively detect and characterize them using tICA. In the present study, we investigated slow dynamics of the protein backbone using MD simulation and tICA. The selected target protein was lysine-, arginine-, ornithine-binding protein (LAO), which comprises two domains and undergoes large domain motions. MD simulation of LAO in explicit water was performed for 1 μs, and the obtained trajectory of Cα atoms in the backbone was analyzed by tICA. This analysis successfully provided us with slow modes for LAO that represented either domain motions or local movements of the backbone. Further analysis elucidated the atomic details of the suggested local motions and confirmed that these motions truly occurred on the expected slow time scale.

  14. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components all of the textbooks content (including texts, exercises, and illustrations) was examined based on…

  15. Large Sample Group Independent Component Analysis of Functional Magnetic Resonance Imaging Using Anatomical Atlas-Based Reduction and Bootstrapped Clustering

    PubMed Central

    Anderson, Ariana; Bramen, Jennifer; Douglas, Pamela K.; Lenartowicz, Agatha; Cho, Andrew; Culbertson, Chris; Brody, Arthur L.; Yuille, Alan L.; Cohen, Mark S.

    2011-01-01

    Independent component analysis (ICA) is a popular method for the analysis of functional magnetic resonance imaging (fMRI) signals that is capable of revealing connected brain systems of functional significance. To be computationally tractable, estimating the independent components (ICs) inevitably requires one or more dimension reduction steps. Whereas most algorithms perform such reductions in the time domain, the input data are much more extensive in the spatial domain, and there is broad consensus that the brain obeys rules of localization of function into regions that are smaller in number than the number of voxels in a brain image. These functional units apparently reorganize dynamically into networks under different task conditions. Here we develop a new approach to ICA, producing group results by bagging and clustering over hundreds of pooled single-subject ICA results that have been projected to a lower-dimensional subspace. Averages of anatomically based regions are used to compress the single subject-ICA results prior to clustering and resampling via bagging. The computational advantages of this approach make it possible to perform group-level analyses on datasets consisting of hundreds of scan sessions by combining the results of within-subject analysis, while retaining the theoretical advantage of mimicking what is known of the functional organization of the brain. The result is a compact set of spatial activity patterns that are common and stable across scan sessions and across individuals. Such representations may be used in the context of statistical pattern recognition supporting real-time state classification. PMID:22049263

  16. Equity in health care in Namibia: developing a needs-based resource allocation formula using principal components analysis

    PubMed Central

    Zere, Eyob; Mandlhate, Custodia; Mbeeli, Thomas; Shangula, Kalumbi; Mutirua, Kauto; Kapenambili, William

    2007-01-01

    Background The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Methods Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Results Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. Conclusion To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care. PMID:17391533

  17. Failure Analysis of Ceramic Components

    SciTech Connect

    B.W. Morris

    2000-06-29

    Ceramics are being considered for a wide range of structural applications due to their low density and their ability to retain strength at high temperatures. The inherent brittleness of monolithic ceramics requires a departure from the deterministic design philosophy utilized to analyze metallic structural components. The design program "Ceramic Analysis and Reliability Evaluation of Structures Life" (CARES/LIFE) developed by NASA Lewis Research Center uses a probabilistic approach to predict the reliability of monolithic components under operational loading. The objective of this study was to develop an understanding of the theories used by CARES/LIFE to predict the reliability of ceramic components and to assess the ability of CARES/LIFE to accurately predict the fast fracture behavior of monolithic ceramic components. A finite element analysis was performed to determine the temperature and stress distribution of a silicon carbide O-ring under diametral compression. The results of the finite element analysis were supplied as input into CARES/LIFE to determine the fast fracture reliability of the O-ring. Statistical material strength parameters were calculated from four-point flexure bar test data. The predicted reliability showed excellent correlation with O-ring compression test data, indicating that the CARES/LIFE program can be used to predict the reliability of ceramic components subjected to complicated stress states using material properties determined from simple uniaxial tensile tests.

  18. Simultaneous multi-wavelength phase-shifting interferometry based on principal component analysis with a color CMOS

    NASA Astrophysics Data System (ADS)

    Fan, Jingping; Lu, Xiaoxu; Xu, Xiaofei; Zhong, Liyun

    2016-05-01

    From a sequence of simultaneous multi-wavelength phase-shifting interferograms (SMWPSIs) recorded by a color CMOS, a principal component analysis (PCA) based multi-wavelength interferometry (MWI) method is proposed. First, a sequence of SMWPSIs with unknown phase shifts is recorded with a single-chip color CMOS camera. Subsequently, the single-wavelength wrapped phases are retrieved with the PCA algorithm. Finally, the unambiguous phase of the extended synthetic wavelength is achieved by subtraction between the single-wavelength wrapped phases. In addition, to eliminate the additional phase introduced by the microscope and the intensity crosstalk among the three color channels, a two-step phase compensation method with and without the measured object in the experimental system is employed. Compared with conventional single-wavelength phase-shifting interferometry, because the proposed PCA-based MWI method requires neither phase-shift calibration nor a phase unwrapping operation, the unambiguous phase of the measured object can be obtained conveniently. Both numerical simulations and experimental results demonstrate that the proposed PCA-based MWI method not only enlarges the measuring range but also avoids amplifying the noise level.
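
    The single-wavelength PCA step admits a compact sketch: after removing the temporal mean, the two leading principal components of the interferogram stack form the quadrature pair from which the wrapped phase follows. The fringe pattern and frame count below are synthetic illustrations, not the experimental settings.

        import numpy as np

        rng = np.random.default_rng(0)
        h, w, n_frames = 64, 64, 8
        yy, xx = np.mgrid[0:h, 0:w]
        phi = 2 * np.pi * (xx + yy) / 80.0                  # ground-truth phase
        deltas = rng.uniform(0, 2 * np.pi, n_frames)        # unknown random phase shifts
        frames = np.stack([100 + 50 * np.cos(phi + d) for d in deltas])

        # PCA phase retrieval: the two leading components of the mean-removed
        # frames span the cos(phi) / sin(phi) quadrature pair.
        D = frames.reshape(n_frames, -1)
        D = D - D.mean(axis=0)
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        wrapped = np.arctan2(Vt[1], Vt[0]).reshape(h, w)    # wrapped phase (up to sign/offset)

        # With two wavelengths, subtracting the two wrapped maps would give the
        # phase of the extended synthetic wavelength, as described above.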

  19. Application of independent component analysis method in real-time spectral analysis of gaseous mixtures for acousto-optical spectrometers based on differential optical absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Fadeyev, A. V.; Pozhar, V. E.

    2012-10-01

    The reliability of a time-optimized method for remote optical spectral analysis of gas-polluted ambient air is discussed. The method, based on differential optical absorption spectroscopy (DOAS), enables fragmentary spectrum registration (FSR) and is suitable for random-spectral-access (RSA) optical spectrometers such as acousto-optical (AO) ones. An algorithm based on the statistical method of independent component analysis (ICA) is proposed for estimating the correctness of the absorption-line selection in the FSR method. Implementations of the ICA method for RSA-based real-time adaptive systems are considered. Numerical simulations are presented using real spectra detected by the trace gas monitoring system GAOS, which is based on an AO spectrometer.
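
    A minimal demonstration of ICA-based unmixing of gas absorption signatures (synthetic line shapes and a hypothetical mixing matrix; not the GAOS processing chain):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        wl = np.linspace(0.0, 1.0, 500)                    # normalized wavelength grid
        s1 = np.exp(-((wl - 0.3) / 0.02) ** 2)             # narrow line, gas 1
        s2 = np.sin(2 * np.pi * 7 * wl) ** 2               # broader structure, gas 2
        S = np.c_[s1, s2]
        A = np.array([[1.0, 0.6], [0.4, 1.0], [0.8, 0.3]]) # unknown mixing (3 channels)
        X = S @ A.T + 0.01 * rng.standard_normal((500, 3)) # measured channel spectra

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)   # estimated independent spectral components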

  20. High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Meng, Jing; Jiang, Zibo; Wang, Lihong V.; Park, Jongin; Kim, Chulhong; Sun, Mingjian; Zhang, Yuanke; Song, Liang

    2016-07-01

    Photoacoustic computed tomography (PACT) has emerged as a unique and promising technology for multiscale biomedical imaging. To fully realize its potential for various preclinical and clinical applications, the development of systems with high imaging speed, reasonable cost, and manageable data flow is needed. Sparse-sampling PACT with advanced reconstruction algorithms, such as compressed-sensing reconstruction, has shown potential as a solution to this challenge. However, most such algorithms require iterative reconstruction and thus intense computation, which may lead to excessively long image reconstruction times. Here, we developed a principal component analysis (PCA)-based PACT (PCA-PACT) that can rapidly reconstruct high-quality, three-dimensional (3-D) PACT images from sparsely sampled data without requiring an iterative process. In vivo images of the vasculature of a human hand were obtained, validating the PCA-PACT method. The results showed that, compared with the back-projection (BP) method, PCA-PACT required ~50% fewer measurements and ~40% less time for image reconstruction, and the imaging quality was almost the same as that for BP with full sampling. In addition, compared with compressed-sensing-based PACT, PCA-PACT had approximately sevenfold faster imaging speed with higher imaging accuracy. This work suggests a promising approach for low-cost, 3-D, rapid PACT for various biomedical applications.
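
    A minimal sketch of the underlying idea, under simplifying assumptions (learn a PCA basis from fully sampled reference frames, then recover a sparsely sampled frame by least-squares fitting of its PCA coefficients; the training matrix, sampling mask, and component count are hypothetical stand-ins for the paper's pipeline):

        import numpy as np

        def pca_basis(training, k):
            """Learn a k-dimensional PCA basis from fully sampled training
            frames (rows = vectorized frames)."""
            mean = training.mean(axis=0)
            U, s, Vt = np.linalg.svd(training - mean, full_matrices=False)
            return mean, Vt[:k].T                 # basis vectors as columns

        def reconstruct(sparse_values, mask, mean, B):
            """Recover a full frame from values at sampled positions
            (mask == True) via least squares on the PCA coefficients."""
            a, *_ = np.linalg.lstsq(B[mask], sparse_values - mean[mask], rcond=None)
            return mean + B @ a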

  1. Ground-roll separation of seismic data based on morphological component analysis in two-dimensional domain

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Hong; Qu, Guang-Zhong; Zhang, Yang; Bi, Yun-Yun; Wang, Jin-Ju

    2016-03-01

    Ground roll is an interference wave that severely degrades the signal-to-noise ratio of seismic data and affects its subsequent processing and interpretation. In this study, according to differences in morphological characteristics between ground roll and reflected waves, we use morphological component analysis based on two-dimensional dictionaries to separate ground roll and reflected waves. Because ground roll is characterized by low frequency, low velocity, and dispersion, we select the two-dimensional undecimated discrete wavelet transform as the sparse representation dictionary of ground roll. Because of the strong local correlation of the reflected waves, we select the two-dimensional local discrete cosine transform as the sparse representation dictionary of reflected waves. A sparse representation model of seismic data is constructed based on a two-dimensional joint dictionary, and a block coordinate relaxation algorithm is used to solve the model and decompose the seismic record into a reflected-wave part and a ground-roll part. The good results for synthetic seismic data and the application to real seismic data indicate that, with this model, strong-energy ground roll is considerably suppressed and the waveform of the reflected waves is effectively preserved.
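
    A rough sketch of the block coordinate relaxation loop, with simplifying substitutions (a decimated 2-D wavelet transform stands in for the undecimated one, and a global 2-D DCT stands in for the local DCT; the wavelet name, level, and threshold schedule are illustrative):

        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def mca_separate(d, n_iter=50):
            """Split a 2-D seismic section d into a ground-roll part (sparse
            in the wavelet dictionary) and a reflection part (sparse in the
            DCT dictionary) by alternating thresholded updates."""
            d = np.asarray(d, dtype=float)
            gr = np.zeros_like(d)
            rf = np.zeros_like(d)
            lam = np.abs(d).max()
            for it in range(n_iter):
                t = lam * (1.0 - it / n_iter)         # linearly decreasing threshold
                # ground-roll update in the wavelet domain
                coeffs = pywt.wavedec2(d - rf, 'db4', level=3)
                coeffs = [coeffs[0]] + [tuple(soft(c, t) for c in lvl)
                                        for lvl in coeffs[1:]]
                gr = pywt.waverec2(coeffs, 'db4')[:d.shape[0], :d.shape[1]]
                # reflection update in the DCT domain
                rf = idctn(soft(dctn(d - gr, norm='ortho'), t), norm='ortho')
            return gr, rf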

  2. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classically sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It dynamically thresholds the differences between the input image and its reconstruction in the PCA linear subspace previously learned as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  3. Multivariate Principal Component Analysis and Case-Based Reasoning for monitoring, fault detection and diagnosis in a WWTP.

    PubMed

    Ruiz, Magda; Sin, Gürkan; Berjaga, Xavier; Colprim, Jesús; Puig, Sebastià; Colomer, Joan

    2011-01-01

    The main idea of this paper is to develop a methodology for process monitoring, fault detection and predictive diagnosis of a WasteWater Treatment Plant (WWTP). To achieve this goal, a combination of Multiway Principal Component Analysis (MPCA) and Case-Based Reasoning (CBR) is proposed. First, MPCA is used to reduce the multi-dimensional nature of online process data, summarising most of the variance of the process data in a few (new) variables. Next, the outputs of MPCA (t-scores, Q-statistic) are provided as inputs (descriptors) to the CBR method, which is employed to identify problems and propose appropriate solutions (hence diagnosis) based on previously stored cases. The methodology is evaluated on a pilot-scale SBR performing nitrogen, phosphorus and COD removal, where it helps to diagnose abnormal situations in the process operation. Finally, it is believed that the methodology is a promising tool for automatic diagnosis and real-time warning, which can be used for daily management of plant operation. PMID:22335109
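
    The two PCA descriptors handed to the CBR step can be computed as sketched below. In full MPCA the three-way batch data are first unfolded to one row per batch; here that unfolded matrix is assumed given, and the component count is illustrative:

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_monitoring_descriptors(X_train, X_new, k=3):
            """Fit PCA on normal-operation (unfolded) batch data and return,
            for each new batch, the t-scores and the Q-statistic (squared
            reconstruction residual) used as case descriptors."""
            pca = PCA(n_components=k).fit(X_train)
            t_scores = pca.transform(X_new)
            X_hat = pca.inverse_transform(t_scores)
            q_stat = np.sum((X_new - X_hat) ** 2, axis=1)
            return t_scores, q_stat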

  4. Source-Based Morphometry: The Use of Independent Component Analysis to Identify Gray Matter Differences With Application to Schizophrenia

    PubMed Central

    Xu, Lai; Groth, Karyn M.; Pearlson, Godfrey; Schretlen, David J.; Calhoun, Vince D.

    2009-01-01

    We present a multivariate alternative to the voxel-based morphometry (VBM) approach called source-based morphometry (SBM), to study gray matter differences between patients and healthy controls. The SBM approach begins with the same preprocessing procedures as VBM. Next, independent component analysis is used to identify naturally grouping, maximally independent sources. Finally, statistical analyses are used to determine the significant sources and their relationship to other variables. The identified “source networks,” groups of spatially distinct regions with common covariation among subjects, provide information about localization of gray matter changes and their variation among individuals. In this study, we first compared VBM and SBM via a simulation and then applied both methods to real data obtained from 120 chronic schizophrenia patients and 120 healthy controls. SBM identified five gray matter sources as significantly associated with schizophrenia. These included sources in the bilateral temporal lobes, thalamus, basal ganglia, parietal lobe, and frontotemporal regions. None of these showed an effect of sex. Two sources in the bilateral temporal and parietal lobes showed age-related reductions. The most significant source of schizophrenia-related gray matter changes identified by SBM occurred in the bilateral temporal lobe, while the most significant change found by VBM occurred in the thalamus. The SBM approach found changes not identified by VBM in the basal ganglia and in the parietal and occipital lobes. These findings show that SBM is a multivariate alternative to VBM, with wide applicability to studying changes in brain structure. PMID:18266214

  5. Identification and analysis of labor productivity components based on ACHIEVE model (case study: staff of Kermanshah University of Medical Sciences).

    PubMed

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the sample consisted of 270 employees from different administrative groups (contractual, fixed-term and regular), selected from the population of 872 personnel at Kermanshah University of Medical Sciences through stratified random sampling based on the Krejcie and Morgan sampling table. The survey tool was the ACHIEVE labor productivity questionnaire. The questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach's alpha coefficient. The data were analyzed with SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), and ability (knowledge and skills), and for total labor productivity, were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. The results also indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of the university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the staff's view, were in a favorable condition and needed to be further taken into consideration by the authorities. PMID:25560364

  7. Quantitative Profiling of Polar Metabolites in Herbal Medicine Injections for Multivariate Statistical Evaluation Based on Independence Principal Component Analysis

    PubMed Central

    Wang, Yuefei; Xu, Lei; Wang, Meng; Zhao, Buchang; Jia, Lifu; Pan, Hao; Zhu, Yan; Gao, Xiumei

    2014-01-01

    Botanical primary metabolites extensively exist in herbal medicine injections (HMIs), but they are often ignored in quality control. Because the routinely applied reversed-phase chromatographic fingerprint technology is biased against hydrophilic substances, primary metabolites with strong polarity, such as saccharides, amino acids and organic acids, are usually difficult to detect with it. In this study, a proton nuclear magnetic resonance (1H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for a further multivariate quality control assay. The quantitative 1H NMR method was validated with good linearity, precision, repeatability, stability and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of the χ2 and Hotelling T2 control charts. Through the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to identify out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid and protocatechuic aldehyde. The integrated strategy has provided a reliable approach to identify and quantify multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs. PMID:25157567
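
    The 99% upper control limit of a Hotelling T2 chart can be computed from the retained component scores as sketched here (p = 2 matches the two IPCs; the score matrix is assumed given, and the limit formula is the standard one for monitoring future observations, not necessarily the exact variant used in the paper):

        import numpy as np
        from scipy.stats import f

        def hotelling_t2_limit(n, p=2, alpha=0.01):
            """Upper control limit of the T^2 chart for p retained components
            estimated from n in-control batches (99% by default)."""
            return p * (n - 1) * (n + 1) / (n * (n - p)) * f.ppf(1 - alpha, p, n - p)

        def t2_values(scores):
            """T^2 of each batch from its (IPC1, IPC2) score vector."""
            mu = scores.mean(axis=0)
            S_inv = np.linalg.inv(np.cov(scores, rowvar=False))
            d = scores - mu
            return np.einsum('ij,jk,ik->i', d, S_inv, d)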

  8. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    We compare different mother wavelets used for de-noising model and experimental data represented by profiles of absorption spectra of exhaled air. The impact of wavelet de-noising on the classification quality achieved by principal component analysis is also discussed.
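
    A minimal sketch of the kind of wavelet de-noising being compared (soft thresholding with the universal threshold; the mother wavelet, level, and noise-scale estimator are common defaults, not necessarily the paper's choices):

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet='db8', level=4):
            """Soft-threshold wavelet de-noising; swap `wavelet` among
            candidates ('db8', 'sym6', 'coif3', ...) to compare how well
            each preserves the absorption-line profile."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale, finest level
            t = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, t, mode='soft')
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]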

  9. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    PubMed

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model of borneol based on principal component analysis (PCA) and support vector machine (SVM) was established. As a common agent in Chinese medicine, borneol comes from different sources and is easily confused in pharmaceutical and trade links, so a rapid, simple and accurate detection and identification method is needed. To assure the quality of borneol products and protect consumers' rights, identifying borneol quickly, efficiently and correctly is significant for its production and trade. Terahertz time-domain spectroscopy is a new spectroscopic approach to characterizing materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. The 2D (PC1 x PC2) and 3D (PC1 x PC2 x PC3) PCA score plots of the three kinds of borneol samples were obtained through PCA, and both showed a good clustering effect for the three different kinds of borneol. The score matrix of the first 10 principal components (PCs) was used to replace the original spectral data; 60 samples of the three kinds of borneol were used for training, and another 60 unknown samples were then identified. Four SVM models with different kernel functions were set up in this way. Results show that the identification and classification accuracy of the SVM with the RBF kernel for the three kinds of borneol is 100%, so the SVM with the radial basis kernel function was selected to establish the borneol identification model; in addition, in the noisy case, the classification accuracy rates of all four SVM kernel functions are above 85%, which indicates that SVM has strong generalization ability. This study shows that the PCA with SVM method applied to borneol terahertz spectroscopy has good classification and identification effects, and provides a new method for species identification of borneol.
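
    The PCA-plus-SVM classifier described above maps onto a standard scikit-learn pipeline; a minimal sketch (the spectra matrix X and labels y are assumed loaded elsewhere; the 10-component and RBF-kernel settings follow the abstract):

        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # X: THz absorption spectra (rows = samples), y: borneol variety labels
        model = make_pipeline(StandardScaler(), PCA(n_components=10),
                              SVC(kernel='rbf'))
        # scores = cross_val_score(model, X, y, cv=5)   # X, y assumed available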

  10. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.

  11. Principle component analysis for radiotracer signal separation.

    PubMed

    Kasban, H; Arafa, H; Elaraby, S M S

    2016-06-01

    Radiotracers can be used in several industrial applications by injecting the radiotracer into the industrial system and monitoring the radiation with radiation detectors to obtain signals. These signals are analyzed to obtain indications about what is happening within the system or to determine the problems that may be present in the system. For multi-phase system analysis, more than one radiotracer is used and the result is a mixture of radiotracer signals. The problem in such cases is how to separate these signals from each other. The paper presents a proposed method based on Principal Component Analysis (PCA) for separating two mixed radiotracer signals from each other. Two different radiotracers (Technetium-99m and Barium-137m) were injected into a physical model for simulation of a chemical reactor (PMSCR-MK2), and the radiotracer signals were obtained using radiation detectors and a Data Acquisition System (DAS). The radiotracer signals are mixed and signal processing steps are performed, including background correction and signal de-noising, before applying the signal separation algorithms. Three separation algorithms have been carried out: a time-domain-based separation algorithm, an Independent Component Analysis (ICA) based separation algorithm, and a Principal Component Analysis (PCA) based separation algorithm. The results proved the superiority of the PCA-based separation algorithm over the other separation algorithms; the PCA-based algorithm together with the signal processing steps gives a considerable improvement in the separation process. PMID:26974488
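
    A rough sketch of a PCA-style separation of two mixed detector signals (plain SVD decorrelation after centering; the paper's full chain also includes background correction and de-noising, which are omitted here, and the input array is hypothetical):

        import numpy as np

        def pca_separate(mixtures):
            """Separate two mixed radiotracer signals with plain PCA.
            `mixtures` has one detector signal per row, time samples
            along columns (hypothetical layout)."""
            X = mixtures - mixtures.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return Vt[:2] * s[:2, None]   # two principal temporal components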

  12. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  13. [Analysis and comparison of intestinal absorption of components of Gegenqinlian decoction in different combinations based on pharmacokinetic parameters].

    PubMed

    Zhang, Yi-Zhu; An, Rui; Yuan, Jin; Wang, Yue; Gu, Qing-Qing; Wang, Xin-Hong

    2013-10-01

    To analyse and compare the characteristics of the intestinal absorption of puerarin, baicalin, berberine and liquiritin in different combinations of Gegenqinlian decoction based on pharmacokinetic parameters, a sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was applied for the quantification of the four components in rat plasma. Pharmacokinetic parameters were determined from the plasma concentration-time data with the DAS software package. The influence of different combinations on the pharmacokinetics of the four components was studied to analyse and compare their absorption differences, together with the results of the in vitro everted gut model and the rat single-pass intestinal perfusion model. The results showed that, compared with other combinations, the AUC values of puerarin, baicalin and berberine were increased significantly in the Gegenqinlian decoction group, while the AUC value of liquiritin was reduced. Moreover, the absorption of the four components was increased significantly, as supported by the results from the in vitro everted gut model and the rat single-pass intestinal perfusion model, which indicated that the Gegenqinlian decoction may promote the absorption of the four components and accelerate the metabolism of liquiritin by cytochrome P450. PMID:24417090

  14. Nonlinear principal component analysis of climate data

    SciTech Connect

    Boyle, J.; Sengupta, S.

    1995-06-01

    This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: the connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.

  15. Component evaluation testing and analysis algorithms.

    SciTech Connect

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  16. A process-based analysis of ocean heat uptake in an AOGCM with an eddy-permitting ocean component

    NASA Astrophysics Data System (ADS)

    Kuhlbrodt, T.; Gregory, J. M.; Shaffrey, L. C.

    2015-12-01

    About 90 % of the anthropogenic increase in heat stored in the climate system is found in the oceans. Therefore it is relevant to understand the details of ocean heat uptake. Here we present a detailed, process-based analysis of ocean heat uptake (OHU) processes in HiGEM1.2, an atmosphere-ocean general circulation model with an eddy-permitting ocean component of 1/3° resolution. Similarly to various other models, HiGEM1.2 shows that the global heat budget is dominated by a downward advection of heat compensated by upward isopycnal diffusion. Only in the upper tropical ocean do we find the classical balance between downward diapycnal diffusion and upward advection of heat. The upward isopycnal diffusion of heat is located mostly in the Southern Ocean, which thus dominates the global heat budget. We compare the responses to a 4xCO2 forcing and an enhancement of the windstress forcing in the Southern Ocean. This highlights the importance of regional processes for the global ocean heat uptake. These are mainly surface fluxes and convection in the high latitudes, and advection in the Southern Ocean mid-latitudes. Changes in diffusion are less important. In line with the CMIP5 models, HiGEM1.2 shows a band of strong OHU in the mid-latitude Southern Ocean in the 4xCO2 run, which is mostly advective. By contrast, in the high-latitude Southern Ocean regions it is the suppression of convection that leads to OHU. In the enhanced windstress run, convection is strengthened at high Southern latitudes, leading to heat loss, while the magnitude of the OHU in the Southern mid-latitudes is very similar to the 4xCO2 results. Remarkably, there is only very small global OHU in the enhanced windstress run. The wind stress forcing just leads to a redistribution of heat. We relate the ocean changes at high Southern latitudes to the effect of climate change on the Antarctic Circumpolar Current (ACC). It weakens in the 4xCO2 run and strengthens in the wind stress run. The weakening is due

  17. System approach to robust acoustic echo cancellation through semi-blind source separation based on independent component analysis

    NASA Astrophysics Data System (ADS)

    Wada, Ted S.

    In this dissertation, we build a foundation for what we refer to as the system approach to signal enhancement as we focus on the acoustic echo cancellation (AEC) problem. Such a “system” perspective aims for the integration of individual components, or algorithms, into a cohesive unit for the benefit of the system as a whole to cope with real-world enhancement problems. The standard system identification approach by minimizing the mean square error (MSE) of a linear system is sensitive to distortions that greatly affect the quality of the identification result. Therefore, we begin by examining in detail the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of the LMS algorithm when there is an interference at the near end, where the source of distortion may be linear or nonlinear. We provide a thorough derivation and analysis of the error recovery nonlinearity (ERN) that “enhances” the filter estimation error prior to the adaptation to transform the corrupted error’s distribution into a desired one, or very close to it, in order to assist the linear adaptation process. We reveal important connections of the residual echo enhancement (REE) technique to other existing AEC and signal enhancement procedures, where the technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. Notably, the single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. Indeed, SBSS optimized via ICA leads to the system combination of the LMS algorithm with the ERN that allows continuous and stable adaptation even during double talk. Next, we extend the system perspective

  18. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  19. Structured Functional Principal Component Analysis

    PubMed Central

    Shou, Haochang; Zipunnikov, Vadim; Crainiceanu, Ciprian M.; Greven, Sonja

    2015-01-01

    Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep. PMID:25327216

  1. Evaluation of the aroma quality of Chinese traditional soy paste during storage based on principal component analysis.

    PubMed

    Peng, Xingyun; Li, Xin; Shi, Xiaodi; Guo, Shuntang

    2014-05-15

    Soy paste, a fermented soybean product, is widely used for flavouring in East and Southeast Asian countries. The characteristic aroma of soy paste is important throughout its shelf life. This study extracted volatile compounds via headspace solid-phase microextraction and conducted a quantitative analysis of 15 key volatile compounds using gas chromatography and gas chromatography-mass spectrometry. Changes in aroma content during storage were analyzed using an accelerated storage model (40 °C, 28 days). Over the 28 days of storage, the results showed that among the key soy paste volatile compounds, alcohol and aldehyde contents decreased by 35% and 26%, respectively. By contrast, acid, ester, and heterocycle contents increased by 130%, 242%, and 15%, respectively. The overall odour type transformed from a floral to a roasting aroma. According to sample clustering in the principal component analysis, the storage life of soy paste could be divided into three periods, representing the floral, roasting, and pungent aroma types of soy paste. PMID:24423567

  2. Application of independent component analysis to ac dipole based optics measurement and correction at the Relativistic Heavy Ion Collider

    NASA Astrophysics Data System (ADS)

    Shen, X.; Lee, S. Y.; Bai, M.; White, S.; Robert-Demolaize, G.; Luo, Y.; Marusic, A.; Tomás, R.

    2013-11-01

    Correction of beta-beat is of great importance for the performance improvement of high-energy accelerators like the Relativistic Heavy Ion Collider (RHIC). At RHIC, using the independent component analysis method, linear optical functions are extracted from the turn-by-turn beam position data of the ac dipole driven betatron oscillation. Despite the constraint of a limited number of available quadrupole correctors at RHIC, a global beta-beat correction scheme using a beta-beat response matrix method was developed and experimentally demonstrated. In both rings, a factor of 2 or better reduction of beta-beat was achieved within the available beam time. At the same time, a new scheme using horizontal closed-orbit bumps at sextupoles to correct beta-beat in the arcs was demonstrated in the Yellow ring of RHIC at a beam energy of 255 GeV, and a peak beta-beat of approximately 7% was achieved.

  3. EP component identification and measurement by principal components analysis.

    PubMed

    Chapman, R M; McCrary, J W

    1995-04-01

    Between the acquisition of Evoked Potential (EP) data and their interpretation lies a major problem: What to measure? An approach to this kind of problem is outlined here in terms of Principal Components Analysis (PCA). An important second theme is that experimental manipulation is important to functional interpretation. It would be desirable to have a system of EP measurement with the following characteristics: (1) represent the data in a concise, parsimonious way; (2) determine EP components from the data without assuming in advance any particular waveforms for the components; (3) extract components which are independent of each other; (4) measure the amounts (contributions) of various components in observed EPs; (5) use measures that have greater reliability than measures at any single time point or peak; and (6) identify and measure components that overlap in time. PCA has these desirable characteristics. Simulations are illustrated. PCA's beauty also has some warts that are discussed. In addition to discussing the usual two-mode model of PCA, an extension of PCA to a three-mode model is described that provides separate parameters for (1) waveforms over time, (2) coefficients for spatial distribution, and (3) scores telling the amount of each component in each EP. PCA is compared with more traditional approaches. Some biophysical considerations are briefly discussed. Choices to be made in applying PCA are considered. Other issues include misallocation of variance, overlapping components, validation, and latency changes. PMID:7626278

  4. Global Observations of SO2 and HCHO Using an Innovative Algorithm based on Principal Component Analysis of Satellite Radiance Data

    NASA Astrophysics Data System (ADS)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay; Fioletov, Vitali; McLinden, Chris

    2015-04-01

    We report on the latest progress in the development and application of a new trace gas retrieval algorithm for spaceborne UV-VIS spectrometers. Developed at NASA Goddard Space Flight Center, this algorithm utilizes the principal component analysis (PCA) technique to extract a series of spectral features (principal components or PCs) explaining the variance of measured reflectance spectra. For a species of interest that has no or very small background signal, such as SO2 or HCHO, the leading PCs (those that explain the most variance) obtained over clean areas are generally associated with various physical processes (e.g., ozone absorption, rotational Raman scattering) and measurement details (e.g., wavelength shift) other than the signal of interest. By fitting these PCs and pre-computed Jacobians for the target species to a measured radiance spectrum, we can then estimate its atmospheric loading. The PCA algorithm has been operationally implemented to produce the new generation NASA Aura/OMI standard planetary boundary layer (PBL) SO2 product. Comparison with the previous OMI PBL SO2 product indicates that the PCA algorithm reduces the retrieval noise by a factor of two and greatly improves the data quality, allowing detection of smaller point SO2 pollution sources that have not been previously measured from space. We have also demonstrated the algorithm for SO2 retrievals using the new NASA/NOAA S-NPP/OMPS UV spectrometer. For HCHO, the new algorithm shows great promise as evidenced by results obtained from both OMI and OMPS. Finally, we discuss the most recent progress in the algorithm development, including the implementation of a new Jacobian lookup table to more appropriately account for the sensitivity of satellite sensors to various measurement conditions (e.g., viewing geometry, surface reflectance and cloudiness).
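
    The fitting step described above is linear and can be sketched in a few lines (the PC matrix, the Jacobian vector, and the measured spectrum are assumed to be precomputed inputs; names are illustrative):

        import numpy as np

        def fit_trace_gas(measured, pcs, jacobian):
            """Estimate a trace-gas loading by fitting the leading PCs (from
            clean-area spectra) plus the pre-computed Jacobian to one
            measured spectrum; the Jacobian coefficient is the retrieved
            column amount. pcs: (n_pcs, n_wavelengths); jacobian, measured:
            (n_wavelengths,)."""
            G = np.column_stack([pcs.T, jacobian])
            coef, *_ = np.linalg.lstsq(G, measured, rcond=None)
            return coef[-1]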

  5. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.

  6. Associated neural network independent component analysis structure

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Kostrzweski, Andrew

    2006-05-01

    Detection, classification, and localization of potential security breaches in extremely high-noise environments are important for perimeter protection and threat detection both for homeland security and for military force protection. Physical Optics Corporation has developed a threat detection system to separate acoustic signatures from unknown, mixed sources embedded in extremely high-noise environments where signal-to-noise ratios (SNRs) are very low. Associated neural network structures based on independent component analysis are designed to detect/separate new acoustic sources and to provide reliability information. The structures are tested through computer simulations for each critical component, including a spontaneous detection algorithm for potential threat detection without a predefined knowledge base, a fast target separation algorithm, and a nonparametric methodology for a quantified confidence measure. The results show that the method discussed can separate hidden acoustic sources in noisy environments with an SNR of 5 dB, with an accuracy of 80%.

  7. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  8. Adaptive detection method of infrared small target based on target-background separation via robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chuanyun; Qin, Shiyin

    2015-03-01

    Motivated by robust principal component analysis, an infrared small-target image is regarded as a low-rank background matrix corrupted by sparse target and noise matrices; thus a new target-background separation model is designed, and an adaptive detection method for infrared small targets is presented. First, multi-scale transform and patch transform are used to generate an image patch set for infrared small-target detection; second, target-background separation of each patch is achieved by recovering the low-rank and sparse matrices using an adaptive weighting parameter; third, image reconstruction and fusion are carried out to obtain the entire separated background and target images; finally, infrared small-target detection is realized by threshold segmentation of a template matching similarity measure. To validate the performance of the proposed method, three experiments (target-background separation, background clutter suppression, and infrared small-target detection) are performed over different clutter backgrounds with real infrared small targets in single-frame or sequence images. A series of experimental results demonstrates that the proposed method not only suppresses background clutter effectively, even under strong noise interference, but also detects targets accurately with a low false-alarm rate.
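
    A simplified sketch of the low-rank/sparse split at the heart of this approach (alternating singular-value thresholding and soft thresholding; a heuristic stand-in for the augmented-Lagrangian solvers usually used for robust PCA, with illustrative parameter choices):

        import numpy as np

        def rpca_alternating(D, n_iter=30):
            """Split a patch matrix D into a low-rank background L and a
            sparse target component S by alternating thresholded updates."""
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))          # standard RPCA weighting
            tau = 0.1 * np.linalg.norm(D, 2)        # illustrative threshold
            S = np.zeros_like(D)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
                L = (U * np.maximum(s - tau, 0.0)) @ Vt   # singular-value thresholding
                S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam * tau, 0.0)
            return L, S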

  9. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time-series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  11. The double K+/Ca2+ sensor based on laser scanned silicon transducer (LSST) for multi-component analysis.

    PubMed

    Ermolenko, Yu; Yoshinobu, T; Mourzina, Yu; Furuichi, K; Levichev, S; Schöning, M J; Vlasov, Yu; Iwasaki, H

    2003-03-10

    In the present work a double ion sensor based on a laser scanned semiconductor transducer (LSST) for the simultaneous determination of K+ and Ca2+ ions in solutions has been developed. Specially elaborated ion-sensitive membrane compositions based on valinomycin and the calcium ionophore calcium bis[4-(1,1,3,3-tetramethylbutyl)phenyl] phosphate (t-HDOPP-Ca) were deposited as separate layers on a silanized surface of the Si/SiO2/Si3N4 transducer. The proposed multi-sensor exhibits theoretical sensitivities, and the detection limits of the sensor were found to be 2 x 10^-6 mol l^-1 for K+ and 5 x 10^-6 mol l^-1 for Ca2+. The elaborated double sensor is proposed for the first time as a prototype of a new type of multi-sensor system for chemical analysis. PMID:18968966

  12. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To solve the multicollinearity issue and the unequal contribution of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemic model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, which indicated they were the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.

  13. Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
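
    As a point of reference for the class of algorithms DOGEDYN belongs to, here is a minimal online sequential-PCA update (Oja's rule). This is not the DOGEDYN algorithm itself, which additionally restricts each update to the dominant element and rescales a dynamic initial learning rate; all parameters below are illustrative:

        import numpy as np

        def oja_first_component(samples, eta0=0.1, n_epochs=5):
            """Extract the first principal vector from a stream of sample
            vectors by an online gradient (Oja-type) update."""
            d = samples.shape[1]
            w = np.random.default_rng(0).standard_normal(d)
            w /= np.linalg.norm(w)
            t = 0
            for _ in range(n_epochs):
                for x in samples:
                    t += 1
                    y = w @ x
                    w += (eta0 / (1 + 0.01 * t)) * y * (x - y * w)  # Oja update
            return w / np.linalg.norm(w)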

  14. Skill Components of Task Analysis

    ERIC Educational Resources Information Center

    Adams, Anne E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool of many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task…

  15. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589

  16. Fault tree analysis of nuclear power plant components and systems. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    SciTech Connect

    Not Available

    1992-09-01

    The bibliography contains citations concerning risk assessment, reliability analysis, failure analysis, and safety studies of nuclear power plant components and systems using fault tree analysis methods. Faults caused by components, human error, environmental considerations, and common mode failures are presented. Various systems and components are analyzed, including high pressure safety injection, auxiliary feedwater, instrumentation, emergency core flooding and cooling, and steam generator tubing. (Contains a minimum of 59 citations and includes a subject term index and title list.)

  17. Multivariate streamflow forecasting using independent component analysis

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Sharma, Ashish; Brown, Casey; Lall, Upmanu

    2008-02-01

    Seasonal forecasting of streamflow provides many benefits to society, by improving our ability to plan and adapt to changing water supplies. A common approach to developing these forecasts is to use statistical methods that link a set of predictors representing climate state, as it relates to historical streamflow, and then to use this model to project streamflow one or more seasons in advance based on the current or a projected climate state. We present an approach for forecasting multivariate time series using independent component analysis (ICA) to transform the multivariate data to a set of univariate time series that are mutually independent, thereby allowing the much broader class of univariate models to provide seasonal forecasts for each transformed series. Uncertainty is incorporated by bootstrapping the error component of each univariate model so that the probability distribution of the errors is maintained. Although all analyses are performed on univariate time series, the spatial dependence of the streamflow is captured by applying the inverse ICA transform to the predicted univariate series. We demonstrate the technique on a multivariate streamflow data set in Colombia, South America, by comparing the results to a range of other commonly used forecasting methods. The results show that the ICA-based technique is significantly better at representing spatial dependence, while not resulting in any loss of ability in capturing temporal dependence. As such, the ICA-based technique would be expected to yield considerable advantages when used in a probabilistic setting to manage large reservoir systems with multiple inflows or data collection points.
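
    A minimal sketch of the transform-forecast-invert idea (an AR(1) stands in for the paper's univariate models, and the residual bootstrap is omitted; the flow matrix is an assumed input with one site per column):

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_forecast_one_step(flows):
            """One-season-ahead multivariate forecast: rotate to mutually
            independent series with ICA, fit a univariate AR(1) to each,
            forecast, and rotate back."""
            ica = FastICA(n_components=flows.shape[1], random_state=0)
            S = ica.fit_transform(flows)                    # independent series
            next_s = np.empty(S.shape[1])
            for j in range(S.shape[1]):
                phi = np.polyfit(S[:-1, j], S[1:, j], 1)[0]  # AR(1) coefficient
                next_s[j] = phi * S[-1, j]
            return ica.inverse_transform(next_s[None, :])[0]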

  18. Finite Element Based Stress Analysis of Graphite Component in High Temperature Gas Cooled Reactor Core Using Linear and Nonlinear Irradiation Creep Models

    SciTech Connect

    Mohanty, Subhasish; Majumdar, Saurindranath

    2015-01-01

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  19. Face Recognition by Independent Component Analysis

    PubMed Central

    Bartlett, Marian Stewart; Movellan, Javier R.; Sejnowski, Terrence J.

    2010-01-01

    A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. PMID:18244540
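
    On one simplified reading of the two architectures, they differ mainly in which axis of the image-by-pixel matrix is handed to ICA; a toy sketch with random stand-in data (the image count, image size, and number of components are all assumptions):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        images = rng.normal(size=(100, 32 * 32))   # 100 hypothetical 32x32 face images

        # Architecture I: images as random variables, pixels as outcomes;
        # the recovered sources act as spatially local basis images
        ica1 = FastICA(n_components=20, random_state=0, max_iter=1000)
        basis_images = ica1.fit_transform(images.T).T        # (20, 1024)
        code1 = images @ np.linalg.pinv(basis_images)        # coefficients per face

        # Architecture II: pixels as random variables, images as outcomes;
        # the recovered sources form a factorial code for each face
        ica2 = FastICA(n_components=20, random_state=0, max_iter=1000)
        code2 = ica2.fit_transform(images)                   # (100, 20)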

  20. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory of cost-equivalent realizations.

  1. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  2. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study

    PubMed Central

    Kreuzer, Stefan W.; Pourmoghaddam, Amir; Leffers, Kevin J.; Johnson, Clint W.; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force-sensing-based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on the transepicondylar axis, the anterior-posterior axis, or the use of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and the other methods (P < 0.05). Soft tissue balancing resulted in smaller deviation from the anatomical epicondylar axis than any other method. Of the operated knees, 77% were within ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation. PMID:26881086

  3. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.

  4. Fatigue analysis codes for WECS components

    SciTech Connect

    Sutherland, H.J.; Ashwill, T.D.; Naassan, K.A.

    1987-10-01

    This manuscript discusses two numerical techniques, the LIFE and LIFE2 codes, that analyze the fatigue life of WECS components. The LIFE code is a PC-compatible Basic code that analyzes the fatigue life of a VAWT component. The LIFE2 code is a PC-compatible Fortran code that relaxes the rather restrictive assumptions of the LIFE code and permits the analysis of the fatigue life of all WECS components. Also, the modular format of the LIFE2 code permits the code to be revised, with minimal effort, to include additional analyses while maintaining its integrity. To illustrate the use of the codes, an example problem is presented. 10 refs.

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  6. Component fragilities. Data collection, analysis and interpretation

    SciTech Connect

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high-level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators, and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturer and model. For some devices, testing even at the shake-table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high-level test data exists.

  7. Principal component analysis for the forensic discrimination of black inkjet inks based on the Vis-NIR fibre optics reflection spectra.

    PubMed

    Gál, Lukáš; Oravec, Michal; Gemeiner, Pavol; Čeppan, Michal

    2015-12-01

    Nineteen black inkjet inks of six different brands were examined by fibre optics reflection spectroscopy in the visible and near-infrared region (Vis-NIR FORS) directly on paper, with a view to achieving good resolution between them. These inks were tested on nineteen different inkjet printers from three brands. Samples were obtained from the prints with a reflection probe. Processed reflection spectra in the range 500-1000 nm were used as samples in principal component analysis. Variability between spectra of the same ink obtained from different prints, as well as between spectra of square areas and lines, was examined. Reference Principal Component Analysis (PCA) models were created for spectra obtained both from square areas and from lines. According to these models, the inkjet inks were divided into clusters. The PCA method is able to separate inks containing carbon black as the main colorant from the other inks using other colorants. Some spectra were recorded from another printer and used as validation samples. Spectra of the validation samples were projected onto the reference PCA models. According to the position of the validation samples in the score plots, it can be concluded that PCA based on Vis-NIR FORS can reliably differentiate inkjet inks which are included in the reference database. The presented method appears to be a suitable tool for forensic examination of questioned documents containing inkjet inks. Ink spectra were obtained without extraction or cutting the sample, with the possibility of measuring outside the laboratory. PMID:26448533
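
    A schematic of the reference-model workflow described above, with random stand-in spectra (the sample counts and component number are assumptions), assuming scikit-learn:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        wavelengths = np.linspace(500, 1000, 251)          # the 500-1000 nm range used above
        ref_spectra = rng.random((57, wavelengths.size))   # processed reference spectra
        val_spectra = rng.random((5, wavelengths.size))    # spectra from another printer

        pca = PCA(n_components=3)
        ref_scores = pca.fit_transform(ref_spectra)     # reference PCA model
        val_scores = pca.transform(val_spectra)         # validation samples projected onto it
        # cluster membership is then judged from where the validation scores
        # fall relative to the reference ink clusters in the score plot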

  8. [Analysis of major components in water based stamp pad inks and their imprints by ultra high performance liquid chromatography-mass spectrometry and gas chromatography-mass spectrometry].

    PubMed

    Zhang, Qing; Zou, Jixin; Shi, Gaojun; Zhang, Lijuan

    2010-12-01

    Ultra high performance liquid chromatography-mass spectrometry (UHPLC-MS) and gas chromatography-mass spectrometry (GC-MS) were used to qualitatively analyze the major components in water based stamp pad inks, including the major colorants and volatile components. After the samples were supersonically extracted and then centrifuged, UHPLC-MS was used to separate and identify the major colorants. A ZORBAX Eclipse Plus Phenyl-Hexyl (50 mm × 4.6 mm, 1.8 μm) column and 15 mmol/L ammonium acetate-acetonitrile were utilized for the separation, and negative selected ion monitoring (SIM) mode was set for the MS analysis. An HP-INNOWAX (30 m × 0.25 mm, 0.25 μm) column was employed in the GC-MS analysis with the full-scan mode to determine the volatiles. This study demonstrated that the major colorants in the inks and their imprints were Acid Red R, Eosin Y and Pigment Red 112, and that the major volatiles were glycerol, 1,2-propanediol, etc. The method is rapid and accurate, and it can meet the requirements for imprint determination in material evidence identification. The work provides a reliable tool for categorization research in the forensic sciences. PMID:21438364

  9. PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Wang, Weichen

    2016-01-01

    This paper introduces a Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into a component that can be explained by subject-specific covariates and an orthogonal residual component. The covariates' effects on the factor loadings are further modeled by an additive model via sieve approximations. Using the newly proposed Projected-PCA, we obtain rates of convergence for the smooth factor loading matrices that are much faster than those of conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests of whether observed covariates have explanatory power for the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374
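
    A schematic of the projection step on synthetic data (all dimensions and the covariate basis are assumptions): the data matrix is first smoothed onto the space spanned by the covariates, and PCA is then applied to the projected matrix.

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 50, 200                            # few samples, many variables
        covariates = rng.normal(size=(n, 4))      # subject-specific covariates (sieve basis)
        data = rng.normal(size=(n, p))

        # orthogonal projection onto the column space of the covariates
        W = covariates
        proj = W @ np.linalg.solve(W.T @ W, W.T)
        smoothed = proj @ data                    # noise outside span(W) is removed

        # PCA of the projected matrix via the SVD
        centered = smoothed - smoothed.mean(axis=0, keepdims=True)
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        factors = U[:, :2] * s[:2]                # estimated latent factors
        loadings = Vt[:2].T                       # estimated factor loadings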

  10. Energy component analysis of π interactions.

    PubMed

    Sherrill, C David

    2013-04-16

    Fundamental features of biomolecules, such as their structure, solvation, and crystal packing and even the docking of drugs, rely on noncovalent interactions. Theory can help elucidate the nature of these interactions, and energy component analysis reveals the contributions from the various intermolecular forces: electrostatics, London dispersion terms, induction (polarization), and short-range exchange-repulsion. Symmetry-adapted perturbation theory (SAPT) provides one method for this type of analysis. In this Account, we show several examples of how SAPT provides insight into the nature of noncovalent π-interactions. In cation-π interactions, the cation strongly polarizes electrons in π-orbitals, leading to substantially attractive induction terms. This polarization is so important that a cation and a benzene attract each other when placed in the same plane, even though a consideration of the electrostatic interactions alone would suggest otherwise. SAPT analysis can also support an understanding of substituent effects in π-π interactions. Trends in face-to-face sandwich benzene dimers cannot be understood solely in terms of electrostatic effects, especially for multiply substituted dimers, but SAPT analysis demonstrates the importance of London dispersion forces. Moreover, detailed SAPT studies also reveal the critical importance of charge penetration effects in π-stacking interactions. These effects arise in cases with substantial orbital overlap, such as in π-stacking in DNA or in crystal structures of π-conjugated materials. These charge penetration effects lead to attractive electrostatic terms where a simpler analysis based on atom-centered charges, electrostatic potential plots, or even distributed multipole analysis would incorrectly predict repulsive electrostatics. SAPT analysis of sandwich benzene, benzene-pyridine, and pyridine dimers indicates that dipole/induced-dipole terms present in benzene-pyridine but not in benzene dimer are relatively

  11. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  12. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.

  13. Principal Components Analysis of Population Admixture

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    With the availability of high-density genotype information, principal components analysis (PCA) is now routinely used to detect and quantify the genetic structure of populations in both population genetics and genetic epidemiology. An important issue is how to make appropriate and correct inferences about population relationships from the results of PCA, especially when admixed individuals are included in the analysis. We extend our recently developed theoretical formulation of PCA to allow for admixed populations. Because the sampled individuals are treated as features, our generalized formulation of PCA directly relates the pattern of the scatter plot of the top eigenvectors to the admixture proportions and parameters reflecting the population relationships, and thus can provide valuable guidance on how to properly interpret the results of PCA in practice. Using our formulation, we theoretically justify the diagnostic of two-way admixture. More importantly, our theoretical investigations based on the proposed formulation yield a diagnostic of multi-way admixture. For instance, we found that admixed individuals with three parental populations are distributed inside the triangle formed by their parental populations and divide the triangle into three smaller triangles whose areas have the same proportions in the big triangle as the corresponding admixture proportions. We tested and illustrated these findings using simulated data and data from HapMap III and the Human Genome Diversity Project. PMID:22808102

  14. Application of new methodologies based on design of experiments, independent component analysis and design space for robust optimization in liquid chromatography.

    PubMed

    Debrus, Benjamin; Lebrun, Pierre; Ceccato, Attilio; Caliaro, Gabriel; Rozet, Eric; Nistor, Iolanda; Oprean, Radu; Rupérez, Francisco J; Barbas, Coral; Boulanger, Bruno; Hubert, Philippe

    2011-04-01

    HPLC separations of an unknown sample mixture and a pharmaceutical formulation have been optimized using a recently developed chemometric methodology proposed by W. Dewé et al. in 2004 and improved by P. Lebrun et al. in 2008. This methodology is based on experimental designs which are used to model retention times of compounds of interest. Then, the prediction accuracy and the optimal separation robustness, including the uncertainty study, were evaluated. Finally, the design space (ICH Q8(R1) guideline) was computed as the probability for a criterion to lie in a selected range of acceptance. Furthermore, the chromatograms were automatically read. Peak detection and peak matching were carried out with a previously developed methodology using independent component analysis published by B. Debrus et al. in 2009. The present successful applications strengthen the high potential of these methodologies for the automated development of chromatographic methods. PMID:21458628

  15. Wavefront aberration measurement method for a hyper-NA lithographic projection lens based on principal component analysis of an aerial image.

    PubMed

    Zhu, Boer; Wang, Xiangzhao; Li, Sikun; Yan, Guanyong; Shen, Lina; Duan, Lifeng

    2016-04-20

    A wavefront aberration measurement method for a hyper-NA lithographic projection lens, based on principal component analysis of aerial images, is proposed. Aerial images of the hyper-NA lithographic projection lens are expressed accurately by using polarized light and a vector imaging model, with the polarization properties taken into account. As a result, the wavefront aberrations of the hyper-NA lithographic projection lens are measured accurately. The lithographic simulator PROLITH is used to validate the accuracy of the wavefront aberration measurement and to analyze the impact on that accuracy of the polarization rotation of the illumination, the degree of polarization of the light, and the sampling interval of the aerial images. The results show that the proposed method can retrieve 33 Zernike coefficients (Z5-Z37) with a maximum error of less than 0.00085λ. PMID:27140087

  16. Identification of Tea Storage Times by Linear Discrimination Analysis and Back-Propagation Neural Network Techniques Based on the Eigenvalues of Principal Components Analysis of E-Nose Sensor Signals

    PubMed Central

    Yu, Huichun; Wang, Yongwei; Wang, Jun

    2009-01-01

    An electronic nose (E-nose) was employed to detect the aroma of green tea after different storage times. Longjing green tea dry leaves, beverages, and residues were each analyzed with the E-nose. In order to decrease the data dimensionality and optimize the feature vector, the E-nose sensor response data were analyzed by principal components analysis (PCA) and the five main principal component values were extracted as the input for the discrimination analysis. The storage time (0, 60, 120, 180 and 240 days) was well discriminated by linear discriminant analysis (LDA) and was predicted by the back-propagation neural network (BPNN) method. The results showed that the discrimination and testing results based on the tea leaves were better than those based on tea beverages and tea residues. The mean errors of the tea leaf data were 9, 2.73, 3.93, 6.33 and 6.8 days, respectively. PMID:22408494
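
    A compact sketch of the PCA-then-LDA stage on invented sensor data (the sensor count, sample sizes, and labels are assumptions); the BPNN prediction step would consume the same five-component scores:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(4)
        responses = rng.normal(size=(150, 10))        # E-nose sensor response features
        storage_days = np.repeat([0, 60, 120, 180, 240], 30)

        clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
        clf.fit(responses, storage_days)
        print(clf.score(responses, storage_days))     # resubstitution accuracy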

  17. The Component-Based Application for GAMESS

    SciTech Connect

    Peng, Fang

    2007-01-01

    GAMESS, a quantum chemistry program for electronic structure calculations, has been freely shared by high-performance application scientists for over twenty years. It provides a rich set of functionalities and can be run on a variety of parallel platforms through a distributed data interface. While a chemistry computation is sophisticated and hard to develop, resource sharing among different chemistry packages accelerates the development of new computations and encourages the cooperation of scientists from universities and laboratories. The Common Component Architecture (CCA) offers an environment that allows scientific packages to dynamically interact with each other through components, which enables dynamic coupling of GAMESS with other chemistry packages, such as MPQC and NWChem. Conceptually, a computation can be constructed with "plug-and-play" components from scientific packages, but this requires more than componentizing the functions/subroutines of interest, especially for large-scale scientific packages with a long development history. In this research, we present our efforts to construct components for GAMESS that conform to the CCA specification. The goal is to enable fine-grained interoperability between three quantum chemistry programs, GAMESS, MPQC and NWChem, via components. We focus on one of the three packages, GAMESS; delineate the structure of GAMESS computations; and describe our approaches to its component development. We then use GAMESS as the driver to interoperate integral components from the other two packages, and show the solutions for interoperability problems along with preliminary results. To demonstrate the versatility of the design, the Tuning and Analysis Utility (TAU) components have been coupled with GAMESS and its components, so that the performance of GAMESS and its components may be analyzed for a wide range of system parameters.

  18. The Analysis of Multitrait-Multimethod Matrices via Constrained Components Analysis.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1996-01-01

    An approach to the analysis of multitrait-multimethod matrices is proposed in which improper solutions are ruled out and convergence is guaranteed. The approach, based on constrained variants of components analysis, provides component scores that can relate components to external variables. It is illustrated through simulated and empirical data.…

  19. Identification of More Feasible MicroRNA–mRNA Interactions within Multiple Cancers Using Principal Component Analysis Based Unsupervised Feature Extraction

    PubMed Central

    Taguchi, Y-h.

    2016-01-01

    MicroRNA(miRNA)–mRNA interactions are important for understanding many biological processes, including development, differentiation and disease progression, but their identification is highly context-dependent. When computationally derived from sequence information alone, the identification should be verified by integrated analyses of mRNA and miRNA expression. The drawback of this strategy is the vast number of identified interactions, which prevents an experimental or detailed investigation of each pair. In this paper, we overcome this difficulty by the recently proposed principal component analysis (PCA)-based unsupervised feature extraction (FE), which reduces the number of identified miRNA–mRNA interactions that properly discriminate between patients and healthy controls without losing biological feasibility. The approach is applied to six cancers: hepatocellular carcinoma, non-small cell lung cancer, esophageal squamous cell carcinoma, prostate cancer, colorectal/colon cancer and breast cancer. In PCA-based unsupervised FE, the significance does not depend on the number of samples (as in the standard case) but on the number of features, which approximates the number of miRNAs/mRNAs. To our knowledge, we have newly identified miRNA–mRNA interactions in multiple cancers based on a single common (universal) criterion. Moreover, the number of identified interactions was sufficiently small to be sequentially curated by literature searches. PMID:27171078

  1. Principal component analysis of phenolic acid spectra

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  2. Principal component analysis implementation in Java

    NASA Astrophysics Data System (ADS)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented in the Java programming language. We consider using the PCA algorithm primarily on data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform that supports Java.

  3. Advanced Placement: Model Policy Components. Policy Analysis

    ERIC Educational Resources Information Center

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the States Policy Analysis identifies key components of…

  4. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest and from healthy and damaged breast tissues as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow a more reliable diagnosis in comparison with the images obtained by the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004

  5. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  6. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  7. Robust Correlated and Individual Component Analysis.

    PubMed

    Panagakis, Yannis; Nicolaou, Mihalis A; Zafeiriou, Stefanos; Pantic, Maja

    2016-08-01

    Recovering correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with application to problems such as multi-modal fusion (via correlated components), predictive analysis, and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely i) the presence of gross non-Gaussian noise and ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA in order to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to four applications, namely i) heterogeneous face recognition, ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), iii) face clustering, and iv) the temporal alignment of facial expressions. Experimental results on 2 synthetic and 7 real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field. PMID:26552077

  8. Independent component analysis of parameterized ECG signals.

    PubMed

    Tanskanen, Jarno M A; Viik, Jari J; Hyttinen, Jari A K

    2006-01-01

    Independent component analysis (ICA) of measured signals yields the independent sources, given that certain requirements are fulfilled. Properly parameterized signals provide a better view of the system aspects under consideration, while reducing the amount of data. It is little acknowledged that appropriately parameterized signals may be subjected to ICA, yielding independent components (ICs) that display more clearly the investigated properties of the sources. In this paper, we propose ICA of parameterized signals, and demonstrate the concept with ICA of ST and R parameterizations of electrocardiogram (ECG) signals from ECG exercise test measurements from two coronary artery disease (CAD) patients. PMID:17945912

  9. ARTICLES: Laser spectrochromatographic analysis of petroleum components

    NASA Astrophysics Data System (ADS)

    Korobeĭnik, G. S.; Letokhov, V. S.; Montanari, S. G.; Tumanova, L. M.

    1985-01-01

    A system combining a gas chromatograph and a laser optoacoustic spectrometer (with a CO2 laser and means for fast frequency scanning) was used to investigate model hydrocarbon mixtures, as well as some real objects in the form of benzine fractions of petroleum oil. The fast scanning regime was used to record optoacoustic spectra of hydrocarbons (in the range 9.2–10.8 μm) during the travel time (1-10 sec) of the individual components of a mixture through an optoacoustic cell in the course of chromatographic separation of these components. The spectra were used to carry out a group hydrocarbon analysis of benzine fractions of petroleum oil from various locations. The proposed method was relatively fast and was characterized by a good ability to identify various components, compared with commonly employed methods such as gas-liquid capillary chromatography.

  10. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  11. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  12. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electrocardiographic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. This is remedied by a proposed adaptive rectification pre-processing step, used to linearize the ECG preamplifier; linear ICA is then applied iteratively until the outputs have stable kurtosis. We call this new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  13. Failure analysis of aluminum alloy components

    NASA Technical Reports Server (NTRS)

    Johari, O.; Corvin, I.; Staschke, J.

    1973-01-01

    Analysis of six service failures in aluminum alloy components that failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in the clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then inferred indirectly from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparison of the received specimens with laboratory-generated failures.

  14. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The focus here is on the time-independent failure response of these materials, and a reliability analysis associated with the initiation of matrix cracking is presented. A public-domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
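
    A minimal illustration of the parameter estimation step, here reduced to a two-parameter Weibull fit of hypothetical matrix-cracking strengths with SciPy (the procedure in the record estimates three parameters; all values below are invented):

        import numpy as np
        from scipy.stats import weibull_min

        # hypothetical failure stresses (MPa) for one component size
        strengths = weibull_min.rvs(c=8.0, scale=350.0, size=30, random_state=5)

        # maximum-likelihood fit with the location parameter fixed at zero
        m, loc, s0 = weibull_min.fit(strengths, floc=0)
        print(f"Weibull modulus m = {m:.1f}, characteristic strength = {s0:.0f} MPa")

        # the fitted CDF gives the failure probability at a given stress
        print(weibull_min.cdf(300.0, m, loc=0, scale=s0))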

  15. Principal components analysis of Jupiter VIMS spectra

    USGS Publications Warehouse

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual-Infrared Mapping Spectrometer (VIMS) instrument took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35–1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work on the Great Red Spot using NIMS data. Some other new findings, presently under investigation, are presented. © 2004 Published by Elsevier Ltd on behalf of COSPAR.
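
    One plausible sketch of the processing chain, PCA on the spectral cube followed by clustering into seven classes (the record does not state how the classes were formed; k-means and the cube dimensions here are assumptions):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        cube = rng.random((64, 64, 96))           # stand-in cube: 64x64 pixels, 96 bands
        spectra = cube.reshape(-1, 96)

        pc_bands = PCA(n_components=5).fit_transform(spectra)
        labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(pc_bands)
        class_map = labels.reshape(64, 64)        # seven-class map of the disk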

  16. WE-G-18C-09: Separating Perfusion and Diffusion Components From Diffusion Weighted MRI of Rectum Tumors Based On Intravoxel Incoherent Motion (IVIM) Analysis

    SciTech Connect

    Tyagi, N; Wengler, K; Mazaheri, Y; Hunt, M; Deasy, J; Gollub, M

    2014-06-15

    Purpose: Pseudodiffusion arises from the microcirculation of blood in the randomly oriented capillary network and contributes to the signal decay acquired using a multi-b-value diffusion weighted (DW)-MRI sequence. This effect is more significant at low b-values and should be properly accounted for in apparent diffusion coefficient (ADC) calculations. The purpose of this study was to separate the perfusion and diffusion components based on a biexponential and a segmented monoexponential model using IVIM analysis. Methods: The signal attenuation is modeled as S(b) = S0[(1−f)exp(−bD) + f exp(−bD*)]. Fitting the biexponential decay leads to the quantification of D, the true diffusion coefficient, D*, the pseudodiffusion coefficient, and f, the perfusion fraction. A nonlinear least-squares fit and two segmented monoexponential models were used to derive the values of D, D*, and f. In the segmented approach, b = 200 s/mm² was used as the cut-off value for the calculation of D. DW-MRIs of a rectum cancer patient, acquired before chemotherapy, before radiation therapy (RT), and 4 weeks into RT, were investigated as an example case. Results: The mean ADC for the tumor drawn on the DWI cases was 0.93, 1.0, and 1.13 × 10⁻³ mm²/s before chemotherapy, before RT, and 4 weeks into RT. The mean (D, D*, f), with D and D* in units of 10⁻³ mm²/s, based on the biexponential fit was (0.67, 18.6, 27.2%), (0.72, 17.7, 28.9%), and (0.83, 15.1, 30.7%) at these time points. The mean (D, D*, f) based on the segmented fit was (0.72, 10.5, 12.1%), (0.72, 8.2, 17.4%), and (0.82, 8.1, 16.5%). Conclusion: ADC values are typically higher than true diffusion coefficients. For tumors with a significant perfusion effect, the ADC should be analyzed at higher b-values or separated from the perfusion component. The biexponential fit overestimates the perfusion fraction because of its increased sensitivity to noise at low b-values.
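
    A minimal sketch of both fits on noise-free synthetic data with SciPy (the b-values and tissue parameters are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        b = np.array([0, 20, 50, 80, 150, 300, 500, 800], float)   # s/mm^2

        def ivim(b, s0, f, d, dstar):
            return s0 * ((1 - f) * np.exp(-b * d) + f * np.exp(-b * dstar))

        signal = ivim(b, 1.0, 0.25, 0.8e-3, 15e-3)   # D, D* in mm^2/s, f = 25%

        # full biexponential fit of S0, f, D, and D*
        params, _ = curve_fit(ivim, b, signal, p0=(1.0, 0.1, 1e-3, 10e-3))

        # segmented fit: D from b >= 200 s/mm^2, where perfusion has decayed
        hi = b >= 200
        slope, intercept = np.polyfit(b[hi], np.log(signal[hi]), 1)
        d_seg = -slope
        f_seg = 1 - np.exp(intercept) / signal[0]    # perfusion fraction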

  17. Spectral Components Analysis of Diffuse Emission Processes

    SciTech Connect

    Malyshev, Dmitry (KIPAC, Menlo Park)

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with the energy spectra. Most of the existing methods use some information about the spatial distribution of the components, e.g., closeness to an external template, independence of components, etc., in order to separate them. In this paper we propose a method where one puts conditions on the spectra only. The advantages of our method are: 1) it is internal: the maps of the components are constructed as combinations of data in different energy bins; 2) the components may be correlated among each other; 3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from the maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an Appendix, we present a generalization of the method, where one can also add a number of external templates.
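
    A toy version of the core idea, with component maps recovered as linear combinations of the energy-bin maps once spectral shapes are assumed (least squares stands in for the paper's likelihood maximization; all shapes and sizes are invented):

        import numpy as np

        rng = np.random.default_rng(7)
        n_bins, n_pix = 7, 1000
        energies = np.arange(1.0, n_bins + 1)

        # assumed spectra: one flat component and one power-law foreground
        spectra = np.column_stack([np.ones(n_bins), energies ** -2.5])

        true_maps = rng.random((2, n_pix))
        data = spectra @ true_maps + 0.01 * rng.normal(size=(n_bins, n_pix))

        # with the spectra fixed, the maps follow from a linear solve per pixel
        est_maps, *_ = np.linalg.lstsq(spectra, data, rcond=None)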

  18. Imaging Brain Dynamics Using Independent Component Analysis

    PubMed Central

    Jung, Tzyy-Ping; Makeig, Scott; McKeown, Martin J.; Bell, Anthony J.; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain. PMID:20824156

  19. Automated resolution of chromatographic signals by independent component analysis-orthogonal signal deconvolution in comprehensive gas chromatography/mass spectrometry-based metabolomics.

    PubMed

    Domingo-Almenara, Xavier; Perera, Alexandre; Ramírez, Noelia; Brezmes, Jesus

    2016-07-01

    Comprehensive gas chromatography-mass spectrometry (GC×GC-MS) provides a different perspective in metabolomics profiling of samples. However, algorithms for GC×GC-MS data processing are needed in order to automatically process the data and extract the purest information about the compounds appearing in complex biological samples. This study shows the capability of independent component analysis-orthogonal signal deconvolution (ICA-OSD), an algorithm based on blind source separation and distributed in an R package called osd, to extract the spectra of the compounds appearing in GC×GC-MS chromatograms in an automated manner. We studied the performance of ICA-OSD by the quantification of 38 metabolites through a set of 20 Jurkat cell samples analyzed by GC×GC-MS. The quantification by ICA-OSD was compared with a supervised quantification by selective ions; most of the R² coefficients of determination were in good agreement (R² > 0.90), while up to 24 cases exhibited an excellent linear relation (R² > 0.95). We concluded that ICA-OSD can be used to resolve co-eluted compounds in GC×GC-MS. PMID:27208528

  20. Impact of parameter fluctuations on the performance of ethanol precipitation in production of Re Du Ning Injections, based on HPLC fingerprints and principal component analysis.

    PubMed

    Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei

    2016-01-01

    The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one at a time, and then examined by HPLC fingerprint analyses. Unlike the traditional evaluation model based on a single constituent or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the most important parameters influencing the recovery of active compounds in the precipitation resultants. The present study would provide a reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and help pharmaceutical enterprises adopt a scientific, reasonable, and cost-effective approach to ensuring batch-to-batch quality consistency of the final products. PMID:26850350

  1. Collagen-based proteinaceous binder-pigment interaction study under UV ageing conditions by MALDI-TOF-MS and principal component analysis.

    PubMed

    Romero-Pastor, Julia; Navas, Natalia; Kuckova, Stepanka; Rodríguez-Navarro, Alejandro; Cardell, Carolina

    2012-03-01

    This study focuses on acquiring information on the degradation process of proteinaceous binders due to ultraviolet (UV) radiation and on possible interactions owing to the presence of historical mineral pigments. With this aim, three different paint model samples were prepared according to medieval recipes, using rabbit glue as the proteinaceous binder. One of these model samples contained only the binder, and the other two were prepared by mixing each of the pigments (cinnabar or azurite) with the binder (glue tempera model samples). The model samples were studied by applying Principal Component Analysis (PCA) to their mass spectra obtained with Matrix-Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF-MS). The complementary use of Fourier Transform Infrared Spectroscopy to study conformational changes in the secondary structure of the proteinaceous binder is also proposed. Ageing effects on the model samples after up to 3000 h of UV irradiation were periodically analyzed by the proposed approach. PCA on MS data proved capable of identifying significant changes in the model samples, and the results suggested different ageing behavior depending on the pigment present. This research represents the first attempt to use this approach (PCA on MALDI-TOF-MS data) in the field of Cultural Heritage and demonstrates its potential benefits in the study of proteinaceous artistic materials for purposes of conservation and restoration. PMID:22431458

  2. Principal component analysis for designed experiments

    PubMed Central

    2015-01-01

    Background Principal component analysis is used to summarize matrix data, such as that found in transcriptome, proteome, or metabolome data and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components are dependent on the particular data set, the components are valid only within the data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. Results The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components beyond experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the
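
    A minimal sketch of the shared-axes idea on synthetic data (the shapes and the particular score scaling are assumptions): the axes are identified once from a designed training set and then applied, on a common scale, to samples from later experiments.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        training = rng.normal(size=(40, 500))     # designed training data set
        unknown = rng.normal(size=(10, 500))      # samples from a later experiment

        pca = PCA(n_components=3)
        pca.fit(training)                         # axes fixed by the training data

        # new samples are scored on the shared axes (centered at the training
        # mean) and scaled so that score sizes are comparable across experiments
        scores = pca.transform(unknown) / np.sqrt(pca.explained_variance_)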

  3. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Simulations show that the proposed method is able to discover dominant modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  5. On-board energy management for high-speed aerospace vehicles: System and component-level energy-based optimization and analysis

    NASA Astrophysics Data System (ADS)

    Taylor, Trent Matthew

    This dissertation addresses in detail three main topics for advancing the state-of-the-art in hypersonic aerospace systems: (1) the development of a synergistic method based on entropy generation in order to analyze, evaluate, and optimize vehicle performance, (2) the development and analysis of innovative unconventional flow-control methods for increasing vehicle performance, utilizing entropy generation as a fundamental descriptor and predictor of performance, and (3) an investigation of issues arising when evaluating (predicting) actual flight vehicle performance using ground test facilities. Vehicle performance is analyzed beginning from fundamental considerations involving fluid and thermodynamic balance relationships. The results enable the use of entropy generation as the true "common currency" (single loss parameter) for systematic and consistent evaluation of performance losses across the vehicle as an integrated system. Innovative flow-control methods are modeled using state-of-the-art CFD codes in which the flow is energized in targeted local zones, with emphasis on shock wave modification. Substantial drag reductions are observed, with drag decreasing to as little as 25% of the baseline. Full vehicle studies are then conducted by comparing traditional and flow-controlled designs; the flow-controlled design achieves very similar axial force with an accompanying increase in lift, which accounts for the on-board energy-addition components. Finally, a full engine flowpath configuration is designed for computational studies of ground test performance versus actual flight performance, with emphasis on understanding the effect of ground-based vitiate (test contaminant). It is observed that the presence of vitiate in the test medium can have a significant first-order effect on ignition delay as well as on the thermodynamic response to a given heat release in the fuel.

  6. Sparse principal component analysis in cancer research

    PubMed Central

    Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa

    2015-01-01

    A critical challenge in analyzing high-dimensional data in cancer research is how to reduce the dimension of the data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that can help reduce data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset illustrates the potential application of sparse PCAs in cancer research. PMID:26719835
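
    As a rough illustration of the variable-selection behavior discussed above, the following sketch contrasts ordinary PCA loadings with sparse PCA loadings on synthetic data. It assumes scikit-learn's SparsePCA, and the matrix is a random stand-in for a gene expression data set.

        import numpy as np
        from sklearn.decomposition import PCA, SparsePCA

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 500))         # e.g., 100 tumors x 500 genes
        X[:, :10] += rng.normal(size=(100, 1))  # a small block of informative genes

        pca = PCA(n_components=2).fit(X)
        spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)

        # Dense loadings spread weight over all genes; sparse loadings zero out
        # most of them, pointing to a candidate gene signature.
        print((np.abs(pca.components_[0]) > 1e-12).sum())   # roughly all 500 nonzero
        print((np.abs(spca.components_[0]) > 1e-12).sum())  # far fewer nonzero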

  7. [Rapid identification of crude and sweated dipsaci radix based on near-infrared spectroscopy combined with principal component analysis-mahalanobis distance].

    PubMed

    Du, Wei-Feng; Jia, Yong-Qiang; Jiang, Dong-Jing; Zhang, Hao

    2014-12-01

    In order to discriminate crude and sweated Dipsaci Radix correctly and rapidly, crude and sweated Dipsaci Radix samples were scanned with an NIR spectrometer, and an identification model was developed by combining near-infrared spectroscopy with the principal component analysis-Mahalanobis distance pattern recognition method. The pretreated spectral data of 129 crude samples and 86 sweated ones were analyzed by principal component analysis (PCA). The identification model was built using the 9881.46-4119.20 cm(-1) spectral region, with "SNV + spectrum + S-G" preprocessing applied to the original spectra and 14 principal components retained; it was then verified against a prediction set, achieving 100% identification accuracy. The rapid NIR identification model for crude and sweated Dipsaci Radix is feasible and efficient, and could serve as an auxiliary means of identifying crude and sweated Dipsaci Radix. PMID:25911809
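
    The PCA-Mahalanobis distance pattern recognition scheme can be sketched as follows: project spectra onto training principal components, then assign an unknown spectrum to the class whose score cloud is nearest in Mahalanobis distance. A minimal sketch with synthetic stand-ins for the pretreated spectra; it is not the published model.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        crude = rng.normal(0.0, 1.0, size=(129, 1500))   # pretreated NIR spectra
        sweated = rng.normal(0.3, 1.0, size=(86, 1500))

        X = np.vstack([crude, sweated])
        pca = PCA(n_components=14).fit(X)                # 14 PCs as in the paper

        def mahalanobis_to_class(score, class_scores):
            mu = class_scores.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(class_scores, rowvar=False))
            d = score - mu
            return float(d @ cov_inv @ d)

        s_crude = pca.transform(crude)
        s_sweated = pca.transform(sweated)
        new = pca.transform(sweated[:1])[0]              # an "unknown" sample
        label = ("crude" if mahalanobis_to_class(new, s_crude)
                 < mahalanobis_to_class(new, s_sweated) else "sweated")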

  8. Structural Analysis Methods Development for Turbine Hot Section Components

    NASA Technical Reports Server (NTRS)

    Thompson, Robert L.

    1988-01-01

    The structural analysis technologies and activities of the NASA Lewis Research Center's gas turbine engine Hot Section Technology (HOST) program are summarized. The technologies synergistically developed and validated include: time-varying thermal/mechanical load models; component-specific automated geometric modeling and solution strategy capabilities; advanced inelastic analysis methods; inelastic constitutive models; high-temperature experimental techniques and experiments; and nonlinear structural analysis codes. Features of the program that incorporate the new technologies and their application to hot section component analysis and design are described. Improved and, in some cases, first-time 3-D nonlinear structural analyses of hot section components of isotropic and anisotropic nickel-base superalloys are presented.

  9. Practical Issues in Component Aging Analysis

    SciTech Connect

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair same as new, leading to a renewal process, and repair same as old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process; the common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
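
    One simple nonparametric trend check of the kind mentioned above is the Laplace test: for failure times observed on (0, T], the statistic below is approximately standard normal when there is no trend (consistent with a renewal process), while strongly positive values indicate failures clustering late in life, favoring a nonhomogeneous Poisson process. A minimal sketch with illustrative failure times.

        import numpy as np

        def laplace_trend_statistic(failure_times, T):
            """Laplace test for trend in a point process observed on (0, T]."""
            t = np.asarray(failure_times, dtype=float)
            n = t.size
            return (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))

        # Example: cumulative failure times of one component over 1000 hours.
        times = [80.0, 210.0, 360.0, 520.0, 700.0, 890.0]
        u = laplace_trend_statistic(times, T=1000.0)
        # |u| < 1.96: no significant trend, a renewal model is plausible;
        # u >> 0: failures cluster late (aging), favoring an NHPP model.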

  10. BBH Classification Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Cadonati, Laura; Clark, James; Day, Brian; Jeng, Ik Siong; Lombardi, Alexander; London, Lionel; Mangini, Nicholas; Logue, Josh

    2015-04-01

    Binary black holes will inspiral, merge and ringdown in the LIGO/VIRGO band for an interesting range of total masses. We present an update on our approach of using Principal Component Analysis to build models of NR BBH waveforms that focus on the merger for generic BBH signals. These models are intended to be used to conduct coarse parameter estimation for gravitational wave burst candidate events. The proposed benefit is a fast, optimized catalog that classifies bulk features in the signal. NSFPHY-0955773, 0955825, SUPA and STFC UK. Simulations by NSF XSEDE PHY120016 and PHY090030.

  11. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  12. Analysis of exogenous components of mortality risks.

    PubMed

    Blinkin, V L

    1998-04-01

    A new technique for deriving exogenous components of mortality risks from national vital statistics has been developed. Each observed death rate Dij (where i corresponds to calendar time (year or interval of years) and j denotes the corresponding age group) was represented as Dij = Aj + Bi*Cj, and the unknown quantities Aj, Bi, and Cj were estimated by a special procedure using the least-squares principle. The coefficients of variation do not exceed 10%. It is shown that the term Aj can be interpreted as the endogenous component and the term Bi*Cj as the exogenous component of the death rate. The aggregate of endogenous components Aj can be described by a regression function corresponding to the Gompertz-Makeham law, A(tau) = gamma + beta*exp(alpha*tau), where gamma, beta, and alpha are constants and tau is age; A(tau) evaluated at tau = tau_j gives A(tau_j) = Aj, with tau_j the value of age tau in the j-th age group. The coefficients of variation for this representation do not exceed 4%. An analysis of exogenous risk levels in the Moscow and Russian populations during 1980-1995 shows that from 1992 up to 1994 all components of exogenous risk in the Moscow population were increasing. The greatest contribution to the total level of exogenous risk came from lethal diseases, whose death rate was 387 deaths per 100,000 persons in 1994, i.e., 61.9% of all deaths. The dynamics of exogenous mortality risk during 1990-1994 in the Moscow population and in the Russian population excluding Moscow were identical: the risk was increasing, and its value in the Russian population was higher than that in the Moscow population. PMID:9637078
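
    The representation Dij = Aj + Bi*Cj is an additive age profile plus a rank-one term, and it can be fitted in the least-squares sense by simple alternation, as sketched below. This is a generic alternating least-squares sketch with synthetic rates, not the author's special procedure.

        import numpy as np

        rng = np.random.default_rng(3)
        n_years, n_ages = 16, 9
        A_true = np.linspace(0.001, 0.05, n_ages)
        D = A_true + np.outer(rng.uniform(0.5, 1.5, n_years),
                              np.linspace(0.004, 0.001, n_ages))
        D += rng.normal(0, 1e-4, D.shape)            # observed rates D[i, j]

        # Alternate: given A, fit the best rank-one term B*C; given B and C, refit A.
        A = D.mean(axis=0)
        for _ in range(50):
            u, s, vt = np.linalg.svd(D - A, full_matrices=False)
            B, C = u[:, 0] * s[0], vt[0]             # best rank-one factor (via SVD)
            A = (D - np.outer(B, C)).mean(axis=0)    # least-squares update of A

        # np.outer(B, C) is the exogenous part; A approximates the endogenous age profile.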

  13. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While it is a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intensive and time-consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
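
    Processing with a fixed basis amounts to projecting each pixel's time history onto precomputed orthonormal vectors instead of eigenvectors estimated from the data. A minimal sketch in which the model-derived eigenvectors are stubbed with orthonormalized polynomials; the real basis would come from the analytic thermal-response model.

        import numpy as np

        def fixed_basis(n_frames, n_vecs):
            """Stand-in for model-derived eigenvectors: orthonormalized polynomials."""
            t = np.linspace(-1, 1, n_frames)
            V = np.vander(t, n_vecs, increasing=True)   # 1, t, t^2, ...
            q, _ = np.linalg.qr(V)
            return q                                    # (n_frames, n_vecs), orthonormal

        frames = np.random.rand(300, 64, 64)            # thermography stack (time, y, x)
        nt, ny, nx = frames.shape
        X = frames.reshape(nt, ny * nx)
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

        Q = fixed_basis(nt, n_vecs=4)
        pc_images = (Q.T @ X).reshape(4, ny, nx)        # one component image per vector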

  14. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. At present, however, these analyses are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher-efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe, reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Of the 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle

  15. Analysis of Variance Components for Genetic Markers with Unphased Genotypes

    PubMed Central

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297

  16. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-02-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in southern Ontario (ON), Canada, were used to evaluate Environment Canada's regional chemical transport model predictions of primary organic aerosol (POA). Environment Canada's operational numerical weather prediction model and the 2006 Canadian and 2005 US national emissions inventories were used as input to the chemical transport model (named AURAMS). Particle-component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. Co-located carbon monoxide (CO), PM2.5 black carbon (BC), and PM1 SO4 measurements were also used for evaluation and interpretation, permitting a detailed diagnostic model evaluation. At the urban site, good agreement was observed for the comparison of daytime campaign PM1 POA and HOA mean values: 1.1 μg m-3 vs. 1.2 μg m-3, respectively. However, a POA overprediction was evident on calm nights due to an overly-stable model surface layer. Biases in model POA predictions trended from positive to negative with increasing HOA values. This trend has several possible explanations, including (1) underweighting of urban locations in particulate matter (PM) spatial surrogate fields, (2) overly-coarse model grid spacing for resolving urban-scale sources, and (3) lack of a model particle POA evaporation process during dilution of vehicular POA tail-pipe emissions to urban scales. Furthermore, a trend in POA bias was observed at the urban site as a function of the BC/HOA ratio, suggesting a possible association of POA underprediction for diesel combustion sources. For several time periods, POA overprediction was also observed for sulphate-rich plumes, suggesting that our model POA fractions for the PM2.5 chemical speciation profiles may be too high for these point sources. At the rural Harrow site

  17. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    NASA Astrophysics Data System (ADS)

    Stroud, C. A.; Moran, M. D.; Makar, P. A.; Gong, S.; Gong, W.; Zhang, J.; Slowik, J. G.; Abbatt, J. P. D.; Lu, G.; Brook, J. R.; Mihele, C.; Li, Q.; Sills, D.; Strawbridge, K. B.; McGuire, M. L.; Evans, G. J.

    2012-09-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species. At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m-3 and 1.2 μg m-3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer. This model behaviour can be explained by a combination of model under

  18. Seismic component fragility data base for IPEEE

    SciTech Connect

    Bandyopadhyay, K.; Hofmayer, C.

    1990-01-01

    Seismic probabilistic risk assessment or a seismic margin study will require a reliable data base of seismic fragility of various equipment classes. Brookhaven National Laboratory (BNL) has selected a group of equipment and generically evaluated the seismic fragility of each equipment class by use of existing test data. This paper briefly discusses the evaluation methodology and the fragility results. The fragility analysis results when used in the Individual Plant Examination for External Events (IPEEE) Program for nuclear power plants are expected to provide insights into seismic vulnerabilities of equipment for earthquakes beyond the design basis. 3 refs., 1 fig., 1 tab.

  19. A structured overview of simultaneous component based data integration

    PubMed Central

    Van Deun, Katrijn; Smilde, Age K; van der Werf, Mariët J; Kiers, Henk AL; Van Mechelen, Iven

    2009-01-01

    Background Data integration is currently one of the main challenges in the biomedical sciences. Often different pieces of information are gathered on the same set of entities (e.g., tissues, culture samples, biomolecules) with the different pieces stemming, for example, from different measurement techniques. This implies that more and more data appear that consist of two or more data arrays that have a shared mode. An integrative analysis of such coupled data should be based on a simultaneous analysis of all data arrays. In this respect, the family of simultaneous component methods (e.g., SUM-PCA, unrestricted PCovR, MFA, STATIS, and SCA-P) is a natural choice. Yet, different simultaneous component methods may lead to quite different results. Results We offer a structured overview of simultaneous component methods that frames them in a principal components setting such that both the common core of the methods and the specific elements with regard to which they differ are highlighted. An overview of principles is given that may guide the data analyst in choosing an appropriate simultaneous component method. Several theoretical and practical issues are illustrated with an empirical example on metabolomics data for Escherichia coli as obtained with different analytical chemical measurement methods. Conclusion Of the aspects in which the simultaneous component methods differ, pre-processing and weighting are consequential. Especially, the type of weighting of the different matrices is essential for simultaneous component analysis. These types are shown to be linked to different specifications of the idea of a fair integration of the different coupled arrays. PMID:19671149
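
    Viewed in the principal components setting described above, a simultaneous component analysis is essentially one SVD of the coupled blocks after block-specific pre-processing and weighting. The sketch below uses one common choice, scaling each centered block to unit total variance (SUM-PCA-like); the two blocks are random stand-ins for coupled measurements on the same samples.

        import numpy as np

        rng = np.random.default_rng(4)
        X1 = rng.normal(size=(40, 120))      # e.g., metabolite features, 40 samples
        X2 = rng.normal(size=(40, 15))       # e.g., enzyme activities, same samples

        blocks = []
        for X in (X1, X2):
            Xc = X - X.mean(axis=0)                   # pre-processing: column-centering
            blocks.append(Xc / np.linalg.norm(Xc))    # weighting: unit total variance per block

        X_joint = np.hstack(blocks)          # coupled arrays share the sample mode
        u, s, vt = np.linalg.svd(X_joint, full_matrices=False)
        scores = u[:, :3] * s[:3]            # common component scores for all blocks
        loadings_block1 = vt[:3, :120]       # block-specific loadings
        loadings_block2 = vt[:3, 120:]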

  20. Quantitative Analysis of PMLA Nanoconjugate Components after Backbone Cleavage

    PubMed Central

    Ding, Hui; Patil, Rameshwar; Portilla-Arias, Jose; Black, Keith L.; Ljubimova, Julia Y.; Holler, Eggehard

    2015-01-01

    Multifunctional polymer nanoconjugates containing multiple components show great promise in cancer therapy, but in most cases complete analysis of each component is difficult. Polymalic acid (PMLA) based nanoconjugates have demonstrated successful brain and breast cancer treatment. They consist of multiple components including targeting antibodies, Morpholino antisense oligonucleotides (AONs), and endosome escape moieties. The component analysis of PMLA nanoconjugates is extremely difficult using conventional spectrometry and HPLC methods. Taking advantage of the polyester nature of PMLA, which can be cleaved by ammonium hydroxide, we describe a method to analyze the antibody and AON content of nanoconjugates simultaneously using SEC-HPLC by selectively cleaving the PMLA backbone. The selected cleavage conditions only degrade PMLA without affecting the integrity and biological activity of the antibody. Although the amount of antibody could also be determined using the bicinchoninic acid (BCA) method, our selective cleavage method gives more reliable results and is more powerful. Our approach provides a new direction for the component analysis of polymer nanoconjugates and nanoparticles. PMID:25894227

  1. Point-process principal components analysis via geometric optimization.

    PubMed

    Solo, Victor; Pasha, Syed Ahmed

    2013-01-01

    There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106

  2. Meta-Analysis of Mathematic Basic-Fact Fluency Interventions: A Component Analysis

    ERIC Educational Resources Information Center

    Codding, Robin S.; Burns, Matthew K.; Lukito, Gracia

    2011-01-01

    Mathematics fluency is a critical component of mathematics learning yet few attempts have been made to synthesize this research base. Seventeen single-case design studies with 55 participants were reviewed using meta-analytic procedures. A component analysis of practice elements was conducted and treatment intensity and feasibility were examined.…

  3. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
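
    The one-step estimate described above can be expressed as a linear least-squares fit of each measured spectrum against the background principal components plus the SO2 Jacobian, with the Jacobian coefficient giving the column amount. A schematic sketch with random stand-ins for the principal components and the Jacobian.

        import numpy as np

        rng = np.random.default_rng(5)
        n_wl, n_pc = 300, 8                   # wavelengths in the fit window, PCs kept

        pcs = np.linalg.qr(rng.normal(size=(n_wl, n_pc)))[0]   # background PCs
        jacobian = rng.normal(size=n_wl)      # dN/dSO2 from a radiative transfer model

        true_scd = 1.7                        # SO2 vertical column (arbitrary units)
        spectrum = pcs @ rng.normal(size=n_pc) + jacobian * true_scd \
                   + rng.normal(0, 1e-3, n_wl)

        # Solve spectrum ~ [PCs | jacobian] @ [pc coefficients, SO2 column] in one step.
        design = np.column_stack([pcs, jacobian])
        coef, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
        so2_column = coef[-1]                 # recovered column amount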

  4. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  5. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  6. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same and depends upon what other components are in the scene, and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We will investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a single mineralogic spectral endmember. It is more likely that many spectral endmembers vary identically to within the noise level, such that the PCA technique cannot distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 micron absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to USGS, JHU, and JPL spectral libraries as acquired through the JPL Aster project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data gathered at the NASA Infrared Telescope Facility in collaboration with

  7. Application of Principal Component Analysis to EUV multilayer defect printing

    NASA Astrophysics Data System (ADS)

    Xu, Dongbo; Evanschitzky, Peter; Erdmann, Andreas

    2015-09-01

    This paper proposes a new method for the characterization of multilayer defects on EUV masks. To reconstruct the defect geometry parameters from the intensity and phase of a defect, Principal Component Analysis (PCA) is employed to parametrize the intensity and phase distributions into principal component coefficients. In order to construct the base functions of the PCA, a combination of a reference multilayer defect and appropriate pupil filters is introduced to obtain the designed sets of intensity and phase distributions. Finally, an Artificial Neural Network (ANN) is applied to correlate the principal component coefficients of the intensity and the phase of the defect with the defect geometry parameters and to reconstruct the unknown defect geometry parameters.
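
    The reconstruction chain can be sketched as: compress simulated intensity/phase signals into principal component coefficients, then regress the defect geometry on those coefficients with a small neural network. A toy illustration on synthetic data assuming scikit-learn; it is not the authors' trained model.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        geometry = rng.uniform(10, 80, size=(400, 2))   # defect width/height (nm)
        signals = np.tanh(geometry @ rng.normal(size=(2, 256)) / 100.0) \
                  + rng.normal(0, 0.01, (400, 256))     # stand-in intensity+phase data

        pca = PCA(n_components=10).fit(signals)
        coeffs = pca.transform(signals)                 # principal component coefficients

        ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                           random_state=0).fit(coeffs[:300], geometry[:300])
        predicted = ann.predict(pca.transform(signals[300:]))  # reconstructed geometry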

  8. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in measuring the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using, e.g., a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain similar sensitivity to the PCA-ICA approach with a more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally improve detection sensitivity in experimental optoacoustic measurements.
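
    In the semi-blind setting, where the absorption spectra are known, per-pixel unmixing reduces to a (nonnegative) least-squares fit of the multiwavelength signal against those spectra. The sketch below shows that step with mock spectra and scipy's nnls; it illustrates spectral unmixing generally, not the VCA algorithm itself.

        import numpy as np
        from scipy.optimize import nnls

        wavelengths = np.linspace(700, 900, 21)                  # nm
        hb = np.exp(-(wavelengths - 760.0) ** 2 / 2e3)           # mock deoxy-Hb spectrum
        hbo2 = np.exp(-(wavelengths - 880.0) ** 2 / 4e3)         # mock oxy-Hb spectrum
        S = np.column_stack([hb, hbo2])                          # known spectral profiles

        rng = np.random.default_rng(7)
        true_maps = rng.uniform(0, 1, size=(2, 32, 32))          # per-pixel concentrations
        data = np.einsum('lk,kij->lij', S, true_maps) \
               + rng.normal(0, 0.01, (21, 32, 32))               # multispectral image stack

        unmixed = np.zeros_like(true_maps)
        for i in range(32):
            for j in range(32):
                unmixed[:, i, j], _ = nnls(S, data[:, i, j])     # nonnegative spectral fit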

  9. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two-channel exercise ECG signals are acquired. We decompose these two channel signals into eight layers and sum the useful wavelet coefficients separately, obtaining two channel ECG signals free of baseline drift and other interference components. However, they still contain electrode-movement noise, power-frequency interference, and other interferences. Second, we process these two channel signals, together with one channel signal constructed manually, by independent component analysis, obtaining the separated ECG signal; the residual noises are removed effectively. Finally, a comparative experiment against the same two-channel exercise ECG signals processed directly with independent component analysis shows that the proposed method increases the signal-to-noise ratio (SNR) by 21.916 and decreases the root mean square error (RMSE) by 2.522, demonstrating its high reliability.
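
    The two-stage scheme can be sketched with PyWavelets and scikit-learn: zero the coarsest wavelet approximation to remove baseline drift, then let FastICA separate the remaining interference from the cardiac source. A simplified sketch on synthetic signals; the decomposition depth follows the eight-layer description above, but the signals and channel construction are stand-ins.

        import numpy as np
        import pywt
        from sklearn.decomposition import FastICA

        fs, t = 360, np.arange(0, 10, 1 / 360)
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 31          # crude ECG-like pulses
        drift = 0.5 * np.sin(2 * np.pi * 0.05 * t)       # baseline wander
        ch1 = ecg + drift + 0.05 * np.random.randn(t.size)
        ch2 = 0.8 * ecg + drift + 0.05 * np.random.randn(t.size)

        def remove_drift(x):
            coeffs = pywt.wavedec(x, 'db4', level=8)     # 8-layer decomposition
            coeffs[0] = np.zeros_like(coeffs[0])         # drop approximation = baseline
            return pywt.waverec(coeffs, 'db4')[:x.size]

        X = np.column_stack([remove_drift(ch1), remove_drift(ch2)])
        sources = FastICA(n_components=2, random_state=0).fit_transform(X)
        # One separated source concentrates the ECG; the other, residual noise.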

  10. Derivative component analysis for mass spectral serum proteomic profiles

    PubMed Central

    2014-01-01

    Background As a promising way to transform medicine, mass spectrometry based proteomics technologies have seen great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods that are able to capture essential data behaviors to achieve clinical-level disease diagnosis. Moreover, the field faces a challenge from data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes the identified biomarker patterns to lose repeatability and prevents them from real clinical usage. Methods In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, derivative component analysis examines input proteomics data in a multi-resolution approach by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker via integrating it with support vector machines to tackle the reproducibility issue, besides comparing it with state-of-the-art peers. Results Our results show that high-dimensional proteomics data are actually linearly separable under the proposed derivative component analysis (DCA). As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also suggests an effective resolution to the reproducibility problem of proteomics data and provides new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical-level diagnostic performances reproducible across different proteomic data, which is more robust and systematic than the existing biomarker discovery based

  11. Component design bases - A template approach

    SciTech Connect

    Pabst, L.F. ); Strickland, K.M. )

    1991-01-01

    A well-documented nuclear plant design basis can enhance plant safety and availability. Older plants, however, often lack historical evidence of the original design intent, particularly for individual components. Most plant documentation describes the actual design (what is) rather than the bounding limits of the design. Without knowledge of these design limits, information from system descriptions and equipment specifications is often interpreted as inviolate design requirements. Such interpretations may lead to unnecessary design conservatism in plant modifications and unnecessary restrictions on plant operation. In 1986, Florida Power and Light Company's (FP&L's) Turkey Point plant embarked on one of the first design basis reconstitution programs in the United States to catalog the true design requirements. As the program developed, design basis users expressed a need for additional information at the component level. This paper outlines a structured (template) approach to developing useful component design basis information (including the WHYs behind the design).

  12. CHEMICAL ANALYSIS METHODS FOR ATMOSPHERIC AEROSOL COMPONENTS

    EPA Science Inventory

    This chapter surveys the analytical techniques used to determine the concentrations of aerosol mass and its chemical components. The techniques surveyed include mass, major ions (sulfate, nitrate, ammonium), organic carbon, elemental carbon, and trace elements. As reported in...

  13. Analysis of failed nuclear plant components

    NASA Astrophysics Data System (ADS)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  14. Effect of the components' interface on the synthesis of methanol over Cu/ZnO from CO2/H2: a microkinetic analysis based on DFT + U calculations.

    PubMed

    Tang, Qian-Lin; Zou, Wen-Tian; Huang, Run-Kun; Wang, Qi; Duan, Xiao-Xuan

    2015-03-21

    The elucidation of chemical reactions occurring on composite systems (e.g., copper (Cu)/zincite (ZnO)) from first principles is a challenging task because of their very large sizes and complicated equilibrium geometries. By combining the density functional theory plus U (DFT + U) method with microkinetic modeling, the present study has investigated the role of the phase boundary in CO2 hydrogenation to methanol over Cu/ZnO. The absence of hydrogenation locations created by the interface between the two catalyst components was revealed based on the calculated turnover frequency under realistic conditions, in which the importance of interfacial copper to provide spillover hydrogen for remote Cu(111) sites was stressed. Coupled with the fact that methanol production on the binary catalyst was recently believed to predominantly involve the bulk metallic surface, the spillover of interface hydrogen atoms onto Cu(111) facets facilitates the production process. The cooperative influence of the two different kinds of copper sites can be rationalized applying the Brönsted-Evans-Polanyi (BEP) relationship and allows us to find that the catalytic activity of ZnO-supported Cu catalysts is of volcano type with decrease in the particle size. Our results here may have useful implications in the future design of new Cu/ZnO-based materials for CO2 transformation to methanol. PMID:25697118

  15. Using surface electromyography (SEMG) to classify low back pain based on lifting capacity evaluation with principal component analysis neural network method.

    PubMed

    Hung, Chia-Chun; Shen, Tsu-Wang; Liang, Chung-Chao; Wu, Wen-Tien

    2014-01-01

    Low back pain (LBP) is a leading cause of disability, and the population with low back pain has grown continuously in recent years. This study tries to distinguish LBP patients from healthy subjects by using objective surface electromyography (SEMG) as a quantitative score for clinical evaluations. Twenty-six healthy and 26 low back pain subjects were involved in this research. They lifted different weights in static and dynamic lifting processes. Multiple features were extracted from the raw SEMG data, including energy and frequency indexes, and a false discovery rate (FDR) procedure removed false-positive features. Then, a principal component analysis neural network (PCANN) was used for classification. The results showed that features at different loadings (including 30% and 50% loading) on lifting can be used to distinguish healthy and back pain subjects. By using the PCANN method, accuracies of more than 80% were achieved when different lifting weights were applied. Moreover, some EMG features correlated with clinical scales of exertion, fatigue, and pain. This technology can potentially be used in future research as a computer-aided diagnosis tool for LBP evaluation. PMID:25569886
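
    The FDR screening step can be implemented with the Benjamini-Hochberg procedure: rank per-feature p-values and keep those below the stepped threshold. A minimal sketch assuming a two-sample t-test per SEMG feature; the feature matrices are random stand-ins.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        healthy = rng.normal(0.0, 1.0, size=(26, 40))    # 26 subjects x 40 SEMG features
        lbp = rng.normal(0.4, 1.0, size=(26, 40))

        pvals = stats.ttest_ind(healthy, lbp, axis=0).pvalue

        def benjamini_hochberg(p, q=0.05):
            order = np.argsort(p)
            thresh = q * (np.arange(1, p.size + 1) / p.size)
            passed = p[order] <= thresh
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            keep = np.zeros(p.size, dtype=bool)
            keep[order[:k]] = True                       # features surviving FDR control
            return keep

        selected = benjamini_hochberg(pvals)             # input features for the PCANN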

  16. Does the Component Processes Task Assess Text-Based Inferences Important for Reading Comprehension? A Path Analysis in Primary School Children

    PubMed Central

    Wassenburg, Stephanie I.; de Koning, Björn B.; de Vries, Meinou H.; van der Schoot, Menno

    2016-01-01

    Using a component processes task (CPT) that differentiates between higher-level cognitive processes of reading comprehension provides important advantages over commonly used general reading comprehension assessments. The present study contributes to further development of the CPT by evaluating the relative contributions of its components (text memory, text inferencing, and knowledge integration) and working memory to general reading comprehension within a single study using path analyses. Participants were 173 third- and fourth-grade children. As hypothesized, knowledge integration was the only component of the CPT that directly contributed to reading comprehension, indicating that the text-inferencing component did not assess inferential processes related to reading comprehension. Working memory was a significant predictor of reading comprehension over and above the component processes. Future research should focus on finding ways to ensure that the text-inferencing component taps into processes important for reading comprehension. PMID:27378989

  18. GPR anomaly detection with robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.

    2015-05-01

    This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
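
    Robust PCA splits a data matrix into a low-rank background and a sparse component, typically by principal component pursuit. Below is a compact, hedged sketch of the standard inexact augmented-Lagrangian iteration; the parameter choices follow common defaults in the RPCA literature, not necessarily this paper's implementation.

        import numpy as np

        def rpca(M, n_iter=200):
            """Principal component pursuit: M = L (low rank) + S (sparse)."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
            for _ in range(n_iter):
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * shrink(sig, 1.0 / mu)) @ Vt        # singular-value thresholding
                S = shrink(M - L + Y / mu, lam / mu)        # soft-threshold the residual
                Y = Y + mu * (M - L - S)                    # dual variable update
            return L, S

        data = np.random.randn(128, 64)                     # e.g., a B-scan after haircut
        low_rank, sparse = rpca(data)                       # anomalies live in `sparse`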

  19. EXAFS and principal component analysis : a new shell game.

    SciTech Connect

    Wasserman, S.

    1998-10-28

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.

  20. Components based on magneto-elastic phenomena

    NASA Astrophysics Data System (ADS)

    Deckert, J.

    1982-01-01

    Magnetoelastic materials and their applications in magnetostrictive oscillators, electromechanical filters, and delay lines are discussed. The properties of commercial magnetoelastic materials are tabulated. The performance of magnetostrictive, piezoelectric, and quartz oscillators is compared. The development of low-cost quartz and piezoelectric materials reduces the significance of magnetostrictive components; uses are restricted to springs and diaphragms, e.g., in clocks or control engineering, temperature-independent resonators, and vibrational systems.

  1. Appliance of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of an intrusion detection system and a firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models for each server, and action models for clients and intruders. We applied ICA to the audit trail of the simulated information system and report the evaluation results for intrusion analysis. In the simulated case, ICA separated two attacks correctly and related an attack to the abnormalities produced in a normal application under the influence of that attack.

  2. Multi-component joint analysis of surface waves

    NASA Astrophysics Data System (ADS)

    Dal Moro, Giancarlo; Moura, Rui Miguel Marques; Moustafa, Sayed S. R.

    2015-08-01

    Propagation of surface waves can occur with complex energy distribution amongst the various modes. It is shown that even simple VS (shear-wave velocity) profiles can generate velocity spectra that, because of complex mode excitation, can be quite difficult to interpret in terms of modal dispersion curves. In some cases, Rayleigh waves show relevant differences depending on the considered component (radial or vertical) and the kind of source (vertical impact or explosive). Contrary to several simplistic assumptions often proposed, it is shown, via both synthetic and field datasets, that the fundamental mode of Rayleigh waves can be almost completely absent. This sort of evidence demonstrates the importance of a multi-component analysis capable of providing the elements necessary to properly interpret the data and adequately constrain the subsurface model. It is also shown that, through the sole use of horizontal geophones, both Love and Rayleigh (radial-component) waves can be acquired efficiently and quickly. The presented field dataset reports a case where Rayleigh waves (both their vertical and radial components) appear largely dominated by higher modes with little or no evidence of the fundamental mode. The joint inversion of the radial and vertical components of Rayleigh waves together with Love waves is performed by adopting a multi-objective inversion scheme based on the computation of synthetic seismograms for the three considered components and the minimization of the whole velocity spectra misfits (Full Velocity Spectra - FVS - inversion). Such a FVS multi-component joint inversion can better handle complex velocity spectra, thus providing a more robust subsurface model unaffected by erroneous velocity spectra interpretations and non-uniqueness of the solution.

  3. Primary component analysis method and reduction of seismicity parameters

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Ma, Qin-Zhong; Lin, Ming-Zhou; Wu, Geng-Feng; Wu, Shao-Chun

    2005-09-01

    In this paper, primary component analysis is performed using 8 seismicity parameters: earthquake frequency N (Ml≥3.0), b-value, η-value, A(b)-value, Mf-value, Ac-value, C-value, and D-value, which reflect the characteristics of the magnitude, time, and space distribution of seismicity in different respects. By using the primary component analysis method, a synthesis parameter W reflecting the anomalous features of earthquake magnitude, time, and space distribution can be obtained. Generally, there is some correlation among the 8 parameters, but their variations differ in different periods, and earthquake prediction based on these individual parameters does not perform very well. However, the synthesis parameter W showed obvious anomalies before 13 earthquakes (MS≥5.8) that occurred in North China, which indicates that the synthesis parameter W can better reflect the anomalous characteristics of the magnitude, time, and space distribution of seismicity. Other problems related to the conclusions drawn by the primary component analysis method are also discussed.
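
    A synthesis parameter of this kind can be formed by standardizing the individual seismicity parameters and taking the leading principal component as W, as sketched below with a random stand-in for the 8-parameter time series.

        import numpy as np

        rng = np.random.default_rng(9)
        params = rng.normal(size=(240, 8))   # monthly values of N, b, eta, A(b), Mf, Ac, C, D

        Z = (params - params.mean(axis=0)) / params.std(axis=0)   # standardize each parameter
        cov = np.cov(Z, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        w1 = eigvecs[:, -1]                  # leading principal direction

        W = Z @ w1                           # synthesis parameter W through time
        anomalous = np.abs(W) > 2.0 * W.std()   # flag months with anomalous seismicity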

  4. Multibody model reduction by component mode synthesis and component cost analysis

    NASA Technical Reports Server (NTRS)

    Spanos, J. T.; Mingori, D. L.

    1990-01-01

    The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.

  5. Fast independent component analysis algorithm for quaternion valued signals.

    PubMed

    Javidi, Soroush; Took, Clive Cheong; Mandic, Danilo P

    2011-12-01

    An extension of the fast independent component analysis algorithm is proposed for the blind separation of both Q-proper and Q-improper quaternion-valued signals. This is achieved by maximizing a negentropy-based cost function, and is derived rigorously using the recently developed HR calculus in order to implement Newton optimization in the augmented quaternion statistics framework. It is shown that the use of augmented statistics and the associated widely linear modeling provides theoretical and practical advantages when dealing with general quaternion signals with noncircular (rotation-dependent) distributions. Simulations using both benchmark and real-world quaternion-valued signals support the approach. PMID:22027374

  6. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  7. Columbia River Component Data Gap Analysis

    SciTech Connect

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  8. Si-based RF MEMS components.

    SciTech Connect

    Stevens, James E.; Nordquist, Christopher Daniel; Baker, Michael Sean; Fleming, James Grant; Stewart, Harold D.; Dyck, Christopher William

    2005-01-01

    Radio frequency microelectromechanical systems (RF MEMS) are an enabling technology for next-generation communications and radar systems in both military and commercial sectors. RF MEMS-based reconfigurable circuits outperform solid-state circuits in terms of insertion loss, linearity, and static power consumption and are advantageous in applications where high signal power and nanosecond switching speeds are not required. We have demonstrated a number of RF MEMS switches on high-resistivity silicon (high-R Si) that were fabricated by leveraging the volume manufacturing processes available in the Microelectronics Development Laboratory (MDL), a Class-1, radiation-hardened CMOS manufacturing facility. We describe novel tungsten and aluminum-based processes, and present results of switches developed in each of these processes. Series and shunt ohmic switches and shunt capacitive switches were successfully demonstrated. The implications of fabricating on high-R Si and suggested future directions for developing low-loss RF MEMS-based circuits are also discussed.

  9. Core Bioactive Components Promoting Blood Circulation in the Traditional Chinese Medicine Compound Xueshuantong Capsule (CXC) Based on the Relevance Analysis between Chemical HPLC Fingerprint and In Vivo Biological Effects

    PubMed Central

    Liu, Hong; Liang, Jie-ping; Li, Pei-bo; Peng, Wei; Peng, Yao-yao; Zhang, Gao-min; Xie, Cheng-shi; Long, Chao-feng; Su, Wei-wei

    2014-01-01

    Compound xueshuantong capsule (CXC) is an oral traditional Chinese herbal formula (CHF) comprised of Panax notoginseng (PN), Radix astragali (RA), Salvia miltiorrhizae (SM), and Radix scrophulariaceae (RS). The present investigation was designed to explore the core bioactive components promoting blood circulation in CXC using high-performance liquid chromatography (HPLC) and animal studies. CXC samples were prepared with different proportions of the 4 herbs according to a four-factor, nine-level uniform design. CXC samples were assessed with HPLC, which identified 21 components. For the animal experiments, rats were soaked in ice water during the time interval between two adrenaline hydrochloride injections to reduce blood circulation. We assessed whole-blood viscosity (WBV), erythrocyte aggregation and red corpuscle electrophoresis indices (EAI and RCEI, respectively), plasma viscosity (PV), maximum platelet aggregation rate (MPAR), activated partial thromboplastin time (APTT), and prothrombin time (PT). Based on the hypothesis that CXC sample effects varied with differences in components, we performed grey relational analysis (GRA), principal component analysis (PCA), ridge regression (RR), and radial basis function (RBF) to evaluate the contribution of each identified component. Our results indicate that panaxytriol, ginsenoside Rb1, angoroside C, protocatechualdehyde, ginsenoside Rd, and calycosin-7-O-β-D-glucoside are the core bioactive components, and that they might play different roles in the alleviation of circulation dysfunction. Panaxytriol and ginsenoside Rb1 had close relevance to red blood cell (RBC) aggregation, angoroside C was related to platelet aggregation, protocatechualdehyde was involved in intrinsic clotting activity, ginsenoside Rd affected RBC deformability and plasma proteins, and calycosin-7-O-β-D-glucoside influenced extrinsic clotting activity. This study indicates that angoroside C, calycosin-7-O-β-D-glucoside, panaxytriol, and
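
    Of the four statistical tools named, ridge regression is the most direct way to score each chromatographic peak against a pharmacological endpoint. A minimal sketch with synthetic stand-ins for the 21 peak areas and a single endpoint such as whole-blood viscosity (nothing here reproduces the paper's data or its GRA/PCA/RBF steps):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
peaks = rng.uniform(size=(9, 21))   # 9 uniform-design samples x 21 peak areas
wbv = peaks @ rng.normal(size=21) + rng.normal(scale=0.1, size=9)  # synthetic

X = StandardScaler().fit_transform(peaks)
model = Ridge(alpha=1.0).fit(X, wbv)

# Rank components by the magnitude of their standardized ridge coefficients.
ranking = np.argsort(np.abs(model.coef_))[::-1]
print("most influential peaks:", ranking[:6])
```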

  10. Interaction Analysis of a Two-Component System Using Nanodiscs

    PubMed Central

    Hörnschemeyer, Patrick; Liss, Viktoria; Heermann, Ralf; Jung, Kirsten; Hunke, Sabine

    2016-01-01

    Two-component systems are the major means by which bacteria couple adaptation to environmental changes. All utilize a phosphorylation cascade from a histidine kinase to a response regulator, and some also employ an accessory protein. The system-wide signaling fidelity of two-component systems is based on preferential binding between the signaling proteins. However, information on the interaction kinetics between membrane embedded histidine kinase and its partner proteins is lacking. Here, we report the first analysis of the interactions between the full-length membrane-bound histidine kinase CpxA, which was reconstituted in nanodiscs, and its cognate response regulator CpxR and accessory protein CpxP. Using surface plasmon resonance spectroscopy in combination with interaction map analysis, the affinity of membrane-embedded CpxA for CpxR was quantified, and found to increase by tenfold in the presence of ATP, suggesting that a considerable portion of phosphorylated CpxR might be stably associated with CpxA in vivo. Using microscale thermophoresis, the affinity between CpxA in nanodiscs and CpxP was determined to be substantially lower than that between CpxA and CpxR. Taken together, the quantitative interaction data extend our understanding of the signal transduction mechanism used by two-component systems. PMID:26882435

  11. Multidimensional Scaling versus Components Analysis of Test Intercorrelations.

    ERIC Educational Resources Information Center

    Davison, Mark L.

    1985-01-01

    Considers the relationship between coordinate estimates in components analysis and multidimensional scaling. Reports three small Monte Carlo studies comparing nonmetric scaling solutions to components analysis. Results are related to other methodological issues surrounding research on the general ability factor, response tendencies in…
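
    A sketch of the comparison's core move, hedged: simulate test scores driven by a general factor, then embed the six tests both by components analysis of the intercorrelation matrix and by nonmetric MDS of the corresponding dissimilarities (scikit-learn's MDS; all data synthetic):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
g = rng.normal(size=(200, 1))                       # general ability factor
loadings = rng.uniform(0.5, 1.0, (1, 6))
scores = g @ loadings + rng.normal(scale=0.5, size=(200, 6))

corr = np.corrcoef(scores, rowvar=False)            # 6 x 6 test intercorrelations

# Components analysis: coordinates = loadings on the two largest components.
vals, vecs = np.linalg.eigh(corr)
pca_coords = vecs[:, -2:] * np.sqrt(vals[-2:])

# Nonmetric MDS on dissimilarities derived from the same correlations.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
mds_coords = mds.fit_transform(1.0 - corr)
print(pca_coords, mds_coords, sep="\n\n")
```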

  12. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

    A probabilistic approach is described for the aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
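
    The general recipe is easy to sketch: draw the uncertain structural and aerodynamic variables from their distributions, push each draw through the response calculation, and summarize the output as a PDF plus sensitivity factors. The response function below is a made-up stand-in, not the aeroelastic solver used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
# Uncertain inputs (illustrative): stiffness, mass, aerodynamic coefficient.
k = rng.normal(1.00, 0.05, n)
m = rng.normal(1.00, 0.03, n)
ca = rng.normal(0.20, 0.02, n)

# Stand-in response: net damping of the critical flutter mode.
damping = np.sqrt(k / m) * 0.01 - ca * 0.04

pdf, edges = np.histogram(damping, bins=50, density=True)  # empirical PDF
sens = {name: np.corrcoef(x, damping)[0, 1]
        for name, x in {"stiffness": k, "mass": m, "aero": ca}.items()}
print("P(instability) =", (damping < 0).mean(), "| sensitivities:", sens)
```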

  13. Elemental analysis of Egyptian phosphate fertilizer components.

    PubMed

    El-Bahi, S M; El-Dine, N Walley; El-Shershaby, A; Sroor, A

    2004-03-01

    The accumulation of certain elements in vitally important media such as water, soil, and food is undesirable from the medical point of view. It is clear that fertilizers vary widely in their heavy-metal and uranium content. A shielded high-purity germanium (HPGe) detector has been used to measure the natural activity concentrations of 238U, 232Th, and 40K in the phosphate fertilizer and its components collected from Abu-Zaabal fertilizers and chemical industries in Egypt. The concentration ranges were 134.97-681.11 Bq kg(-1), 125.23-239.26 Bq kg(-1), and 446.11-882.45 Bq kg(-1) for 238U, 232Th, and 40K, respectively. The absorbed dose rate ranged from 177.14 to 445.90 nGy h(-1), and the (dimensionless) external hazard index from 1.03 to 2.71. The concentrations of 22 elements (Be, Na, Mg, Si, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Sr, Zr, Mo, Cd, Ba) in the samples under investigation were determined by inductively coupled plasma optical-emission spectrometry (ICP-OES). The results for the input raw materials (rock phosphate, limestone, and sulfur) and the output product as final fertilizer are presented and discussed. PMID:14982231

  14. Cell-Based Therapies Formulations: Unintended components.

    PubMed

    Atouf, Fouad

    2016-07-01

    Cell-based therapy is the fastest growing segment of regenerative medicine, a field that promises to cure diseases not treated by small-molecule or biological drugs. The use of living cells as the active medicinal ingredient presents great opportunities to deliver treatments that can trigger the body's own capacity to regenerate damaged or diseased tissue. Some of the challenges in controlling the quality of the finished cell-therapy product relate to the use of a variety of raw materials, including excipients, process aids, and growth-promotion factors. The quality of these materials is critical for ensuring the safety and quality of the finished therapeutic products. This review discusses some of the challenges and opportunities associated with the qualification of excipients as well as of the ancillary materials used in manufacturing. PMID:27233803

  15. Component-Based Approach in Learning Management System Development

    ERIC Educational Resources Information Center

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) for learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements and possibilities, is…

  16. Balancing generality and specificity in component-based reuse

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Beck, Jon

    1992-01-01

    For a component industry to be successful, we must move beyond the current techniques of black-box reuse and genericity to a more flexible framework supporting customization of components as well as instantiation and composition of components. Customization of components strikes a balance between creating dozens of variations of a base component and requiring the overhead of unnecessary features of an 'everything but the kitchen sink' component. We argue that design and instantiation of reusable components have competing criteria - design-for-use strives for generality, design-with-reuse strives for specificity - and that providing mechanisms for each can be complementary rather than antagonistic. In particular, we demonstrate how program slicing techniques can be applied to the customization of reusable components.

  17. Application of independent component analysis in face images: a survey

    NASA Astrophysics Data System (ADS)

    Huang, Yuchi; Lu, Hanqing

    2003-09-01

    Face technologies, which can be applied to access control and surveillance, are essential to intelligent vision-based human-computer interaction. The research efforts in this field include face detection, face recognition, face retrieval, etc. However, these tasks are challenging because of variability in view point, lighting, pose, and expression of human faces. The ideal face representation should account for this variability so that we can develop robust algorithms for our applications. Independent Component Analysis (ICA), as an unsupervised learning technique, has been used to find such a representation and has obtained good performance in several applications. In the first part of this paper, we describe the models of ICA and its extensions: Independent Subspace Analysis (ISA) and Topographic ICA (TICA). Then we summarize the progress in the applications of ICA and its extensions to face images. Finally, we propose a promising direction for future research.
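
    The basic ICA-representation recipe the survey covers is brief in code: learn statistically independent basis images from a matrix of flattened faces with FastICA. A sketch using scikit-learn's implementation; the face matrix is random placeholder data (a real application would load aligned grayscale faces, and convergence warnings on such noise are expected):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
faces = rng.random((400, 64 * 64))   # placeholder: 400 flattened 64x64 faces

ica = FastICA(n_components=40, whiten="unit-variance",
              random_state=0, max_iter=500)
codes = ica.fit_transform(faces)     # per-face coefficients (features)
basis = ica.mixing_.T                # 40 independent "basis images"
print(codes.shape, basis.shape)
```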

  18. Component analysis of the protein hydration entropy

    NASA Astrophysics Data System (ADS)

    Chong, Song-Ho; Ham, Sihyun

    2012-05-01

    We report the development of an atomic decomposition method of the protein solvation entropy in water, which allows us to understand global change in the solvation entropy in terms of local changes in protein conformation as well as in hydration structure. This method can be implemented via a combined approach based on molecular dynamics simulation and integral-equation theory of liquids. An illustrative application is made to 42-residue amyloid-beta protein in water. We demonstrate how this method enables one to elucidate the molecular origin for the hydration entropy change upon conformational transitions of protein.

  19. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments in non-linear or kernel-based sparse representations have been shown to outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
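
    The kernel step by itself can be sketched with scikit-learn: apply kernel PCA to the temporal profile of each spatial location and map back through the library's built-in pre-image approximation. This is only the KPCA representation, not the paper's modified pre-image problem or the CS iteration around it:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
# Rows = spatial locations, columns = time frames (synthetic dynamics).
profiles = np.sin(np.linspace(0, 6, 40)) + 0.1 * rng.normal(size=(1000, 40))

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True)
feats = kpca.fit_transform(profiles)        # compact code in feature space
approx = kpca.inverse_transform(feats)      # pre-image approximation
print(approx.shape)
```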

  20. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
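
    For orientation, the prior gradient-descent sequential PCA that DOGEDYN improves upon is essentially Oja's rule, which fits in a few lines (this sketch is the classic baseline, not DOGEDYN itself):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 8)) @ rng.normal(size=(8, 8))  # correlated stream
X -= X.mean(axis=0)

w = rng.normal(size=8)
w /= np.linalg.norm(w)
for t, x in enumerate(X):            # one sample at a time (sequential PCA)
    eta = 1.0 / (100 + t)            # decaying learning rate
    y = w @ x
    w += eta * y * (x - y * w)       # Oja's rule keeps ||w|| near 1

# Compare against the batch principal component.
true_pc = np.linalg.eigh(np.cov(X, rowvar=False))[1][:, -1]
print("alignment:", abs(w @ true_pc) / np.linalg.norm(w))
```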

  1. SIFT - A Component-Based Integration Architecture for Enterprise Analytics

    SciTech Connect

    Thurman, David A.; Almquist, Justin P.; Gorton, Ian; Wynne, Adam S.; Chatterton, Jack

    2007-02-01

    Architectures and technologies for enterprise application integration are relatively mature, resulting in a range of standards-based and proprietary middleware technologies. In the domain of complex analytical applications, integration architectures are not so well understood. Analytical applications such as those used in scientific discovery, emergency response, financial and intelligence analysis exert unique demands on their underlying architecture. These demands make existing integration middleware inappropriate for use in enterprise analytics environments. In this paper we describe SIFT (Scalable Information Fusion and Triage), a platform designed for integrating the various components that comprise enterprise analytics applications. SIFT exploits a common pattern for composing analytical components, and extends an existing messaging platform with dynamic configuration mechanisms and scaling capabilities. We demonstrate the use of SIFT to create a decision support platform for quality control based on large volumes of incoming delivery data. The strengths of the SIFT solution are discussed, and we conclude by describing where further work is required to create a complete solution applicable to a wide range of analytical application domains.

  2. Analysis of nuclear power plant component failures

    SciTech Connect

    Not Available

    1984-01-01

    Items are shown that caused 90% of the nuclear unit outages and/or deratings between 1971 and 1980, and the magnitude of the problem is indicated by an estimate of power replacement cost when the units are out of service or derated. The funding EPRI has provided on these specific items for R&D and technology transfer in the past and the funding planned for the future (1982 to 1986) are shown. EPRI's R&D may help the utilities on only a small part of their nuclear unit outage problems. For example, refueling is the major cause of nuclear unit outages or deratings and the steam turbine is the second major cause; however, these two items have been ranked fairly low on the EPRI priority list for R&D funding. Other items, such as nuclear safety (NRC requirements), reactor general, reactor and safety valves and piping, and reactor fuel, appear to be receiving more priority than is necessary, as determined by analysis of nuclear unit outage causes.

  3. Identifying signatures of sexual selection using genomewide selection components analysis

    PubMed Central

    Flanagan, Sarah P; Jones, Adam G

    2015-01-01

    Sexual selection must affect the genome for it to have an evolutionary impact, yet signatures of selection remain elusive. Here we use an individual-based model to investigate the utility of genome-wide selection components analysis, which compares allele frequencies of individuals at different life history stages within a single population to detect selection without requiring a priori knowledge of traits under selection. We modeled a diploid, sexually reproducing population and introduced strong mate choice on a quantitative trait to simulate sexual selection. Genome-wide allele frequencies in adults and offspring were compared using weighted FST values. The average number of outlier peaks (i.e., those with significantly large FST values) with a quantitative trait locus in close proximity (“real” peaks) represented correct diagnoses of loci under selection, whereas peaks above the FST significance threshold without a quantitative trait locus reflected spurious peaks. We found that, even with moderate sample sizes, signatures of strong sexual selection were detectable, but larger sample sizes improved detection rates. The model was better able to detect selection with more neutral markers, and when quantitative trait loci and neutral markers were distributed across multiple chromosomes. Although environmental variation decreased detection rates, the identification of real peaks nevertheless remained feasible. We also found that detection rates can be improved by sampling multiple populations experiencing similar selection regimes. In short, genome-wide selection components analysis is a challenging but feasible approach for the identification of regions of the genome under selection. PMID:26257884
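
    The core computation is compact: compare allele frequencies between life stages at each locus and flag F_ST outliers. A toy sketch with a deliberately simple variance-based F_ST on synthetic frequencies (the study uses a weighted, Weir-and-Cockerham-style estimator on genotypes):

```python
import numpy as np

rng = np.random.default_rng(7)
n_loci = 2000
p_off = rng.uniform(0.05, 0.95, n_loci)        # offspring allele frequencies
shift = np.zeros(n_loci)
shift[:20] = 0.15                              # 20 loci under strong selection
p_ad = np.clip(p_off + shift * rng.choice([-1, 1], n_loci), 0.01, 0.99)

# Simple two-sample variance F_ST per locus.
p_bar = (p_off + p_ad) / 2
fst = (p_off - p_ad) ** 2 / (4 * p_bar * (1 - p_bar))

threshold = np.quantile(fst, 0.995)
print("outlier loci:", np.flatnonzero(fst > threshold)[:10])
```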

  4. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  5. Volume component analysis for classification of LiDAR data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.

    2015-03-01

    One of the most difficult challenges of working with LiDAR data is the large number of data points that are produced. Analysing these large data sets is an extremely time-consuming process. For this reason, automatic perception of LiDAR scenes is a growing area of research. Currently, most LiDAR feature extraction relies on geometrical features specific to the point cloud of interest. These geometrical features are scene-specific, and often rely on the scale and orientation of the object for classification. This paper proposes a robust method for reduced-dimensionality feature extraction of 3D objects using a volume component analysis (VCA) approach. The VCA approach is based on principal component analysis (PCA). PCA is a method of reduced feature extraction that computes a covariance matrix from the original input vector. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix are used to describe an image. Block-based PCA is an adapted method for feature extraction in facial images because PCA, when performed in local areas of the image, can extract more significant features than when the entire image is considered. The image space is split into several of these blocks, and PCA is computed individually for each block. In VCA, a LiDAR point cloud is represented as a series of voxels whose values correspond to the point density within that relative location. From this voxelized space, block-based PCA is used to analyze sections of the space which, when combined, represent features of the entire 3D object. These features are then used as the input to a support vector machine which is trained to identify four classes of objects (vegetation, vehicles, buildings, and barriers) with an overall accuracy of 93.8%.
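
    A condensed sketch of the pipeline's three stages on toy data: voxelize each point cloud into a density grid, run PCA block by block, and feed the concatenated block features to an SVM. Array sizes and class labels are invented (labels are random here, so accuracy sits near chance; the point is the pipeline's shape):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(8)

def voxel_density(points, grid=8):
    """Point cloud (N x 3, coords in [0, 1)) -> flattened density grid."""
    idx = np.clip((points * grid).astype(int), 0, grid - 1)
    vol = np.zeros((grid, grid, grid))
    np.add.at(vol, tuple(idx.T), 1.0)
    return vol.reshape(-1)

clouds = [rng.random((500, 3)) for _ in range(200)]   # toy point clouds
labels = rng.integers(0, 4, 200)                      # 4 object classes
V = np.stack([voxel_density(c) for c in clouds])      # 200 x 512 voxel vectors

# Block-based PCA: reduce each block of the voxel vector separately.
blocks = np.split(V, 8, axis=1)
feats = np.hstack([PCA(n_components=4).fit_transform(b) for b in blocks])

clf = SVC(kernel="rbf").fit(feats[:150], labels[:150])
print("toy accuracy:", clf.score(feats[150:], labels[150:]))
```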

  6. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively; these are not only greater than those of most methods, but also superior to the second human observer's performance. The results show that the proposed method achieves improved detection in abnormal retinal images and decreases false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results. PMID:25697986

  7. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.

    1992-01-01

    An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.

  8. Key components of financial-analysis education for clinical nurses.

    PubMed

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were held; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. PMID:25917407

  9. Arthropod surveillance programs: Basic components, strategies, and analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium “Advancements in arthro...

  10. PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION

    EPA Science Inventory

    The mathematics behind the techniques of principal component analysis and partial least squares regression is presented in detail, starting from the appropriate extreme conditions. The meaning of the resultant vectors and many of their mathematical interrelationships are also pres...

  11. Physical parameter effects on radar backscatter using principal component analysis

    NASA Astrophysics Data System (ADS)

    Chuah, Hean T.; Teh, K. B.

    1994-12-01

    This paper contains a sensitivity analysis of the effects of physical parameters on radar backscatter coefficients from a vegetation canopy using the method of principal component analysis. A Monte Carlo forward scattering model is used to generate the necessary data set for such analysis. The vegetation canopy is modeled as a layer of randomly distributed circular disks bounded below by a Kirchhoff rough surface. Data reduction is accomplished by the statistical principal component analysis technique in which only three principal components are found to be sufficient, containing 97% of the information in the original set. The first principal component can be interpreted as volume-volume backscatter, while the second and the third as surface backscatter and surface-volume backscatter, respectively. From the correlation matrix obtained, the sensitivity of radar backscatter due to various physical parameters is investigated. These include wave frequency, moisture content, scatterer's size, volume fraction, ground permittivity and surface roughness.

  12. Principal Component Analysis Studies of Turbulence in Optically Thick Gas

    NASA Astrophysics Data System (ADS)

    Correia, C.; Lazarian, A.; Burkhart, B.; Pogosyan, D.; De Medeiros, J. R.

    2016-02-01

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position-position-velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes, and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal -3 spectrum in accordance with the predictions of the Lazarian & Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter not to be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields, which show monotonic PCA behavior. We attribute this difference to PCA's sensitivity to Fourier phase information.

  13. Quantitative Analysis of Porosity and Transport Properties by FIB-SEM 3D Imaging of a Solder Based Sintered Silver for a New Microelectronic Component

    NASA Astrophysics Data System (ADS)

    Rmili, W.; Vivet, N.; Chupin, S.; Le Bihan, T.; Le Quilliec, G.; Richard, C.

    2016-04-01

    As part of the development of a new assembly technology to achieve bonding for an innovative silicon carbide (SiC) power device used in harsh environments, the aim of this study is to compare two silver sintering profiles and then to define the best candidate die attach material for this new component. To achieve this goal, the solder joints have been characterized in terms of porosity by determining the morphological characteristics of the material heterogeneities and estimating their thermal and electrical transport properties. The three-dimensional (3D) microstructure of sintered silver samples has been reconstructed using a focused ion beam scanning electron microscope (FIB-SEM) tomography technique. The sample preparation and the experimental milling and imaging parameters have been optimized in order to obtain a high-quality 3D reconstruction. Volume fractions and volumetric connectivity of the individual phases (silver and voids) have been determined. Effective thermal and electrical conductivities of the samples and the tortuosity of the silver phase have also been evaluated by solving the diffusive transport equation.

  14. Array Independent Component Analysis with Application to Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kukuyeva, Irina A.

    2012-11-01

    There are three ways to learn about an object: from samples taken directly from the site, from simulation studies based on its known scientific properties, or from remote sensing images. All three are carried out to study Earth and Mars. Our goal, however, is to learn about the second largest storm on Jupiter, called the White Oval, whose characteristics are unknown to this day. As Jupiter is a gas giant and hundreds of millions of miles away from Earth, we can only make inferences about the planet from retrieval algorithms and remotely sensed images. Our focus is to find latent variables from the remotely sensed data that best explain its underlying atmospheric structure. Principal Component Analysis (PCA) is currently the most commonly employed technique to do so. For a data set with more than two modes, this approach fails to account for all of the variable interactions, especially if the distribution of the variables is not multivariate normal; an assumption that is rarely true of multispectral images. The thesis presents an overview of PCA along with the most commonly employed decompositions in other fields: Independent Component Analysis, Tucker-3 and CANDECOMP/PARAFAC and discusses their limitations in finding unobserved, independent structures in a data cube. We motivate the need for a novel dimension reduction technique that generalizes existing decompositions to find latent, statistically independent variables for one side of a multimodal (number of modes greater than two) data set while accounting for the variable interactions with its other modes. Our method is called Array Independent Component Analysis (AICA). As the main question of any decomposition is how to select a small number of latent variables that best capture the structure in the data, we extend the heuristic developed by Ceulemans and Kiers in [10] to aid in model selection for the AICA framework. The effectiveness of each dimension reduction technique is determined by the degree of

  15. Rigorous analysis and design of compact photonic components

    NASA Astrophysics Data System (ADS)

    Cai, Jingbo

    Rigorous design of compact photonic components for planar lightwave circuits (PLCs) typically necessitates large and lengthy numerical calculations involving the solution of Maxwell's equations and enforcing proper boundary conditions. Finding ways to shorten the analysis time and to accommodate larger problem sizes without sacrificing accuracy is very important to the success of the design process. As part of my dissertation research, I implemented a rigorous design tool based on a parallelized three-dimensional (3D) Finite Difference Time Domain (FDTD) algorithm to solve optical propagation problems in PLCs. Parallelism allows problems of larger size to be handled and also reduces computational time. Other analysis methods, such as rigorous coupled wave analysis (RCWA) and angular plane wave analysis (APWA), are also used to complement FDTD analysis or to speed up the design process. Using these tools, an ultra-compact high-efficiency 90-degree bend based on a hybrid photonic crystal (PhC) and conventional waveguide (CWG) structure is numerically analyzed in three dimensions. The effect of the third dimension on the optical efficiency of the bend is found to be accountable as the clipping of the three-dimensional input waveguide mode by the finite PhC region. A much more efficient analysis method than the 3D FDTD is developed based on this result for the approximate prediction of the optical efficiency of the bend. An ultra-short integrated waveguide polarization converter based on form birefringence is designed with an efficient two-stage approach which fully addresses the 3D nature of the problem. The designed converter achieves a conversion efficiency of 98% in a length of slightly over 4 μm with an insertion loss less than 0.5 dB. Moreover, two different methods are adopted for the analysis of single air interface bends (SAIBs) in waveguides. One is the rigorous but time-consuming 3D FDTD method. The other, which is approximate but requires only a very short

  16. Multivariate analysis of intracranial pressure (ICP) signal using principal component analysis.

    PubMed

    Al-Zubi, N; Momani, L; Al-Kharabsheh, A; Al-Nuaimy, W

    2009-01-01

    The diagnosis and treatment of hydrocephalus and other neurological disorders often involve the acquisition and analysis of large amounts of intracranial pressure (ICP) signal data. Although the analysis and subsequent interpretation of these data are an essential part of the clinical management of the disorders, they are typically done manually by a trained clinician, and the difficulty in interpreting some of the features of this complex time series can sometimes lead to issues of subjectivity and reliability. This paper presents a method for the quantitative analysis of these data using a multivariate approach based on principal component analysis, with the aim of optimising symptom diagnosis, patient characterisation and treatment simulation and personalisation. In this method, 10 features are extracted from the ICP signal, and principal components that represent these features are defined and analysed. Results from ICP traces of 40 patients show that the chosen features carry relevant information about the ICP signal and can be represented with a few components of the PCA (approximately 91% of the total variance of the data is represented by the first four components), and that these components can be helpful in characterising subgroups in the patient population that would otherwise not have been apparent. The introduction of supplementary (non-ICP) variables has offered insight into additional groupings and relationships which may prove to be a fruitful avenue for exploration. PMID:19964826

  17. A Design for Standards-based Knowledge Components.

    ERIC Educational Resources Information Center

    Anderson, Thor A.; Merrill, M. David

    2000-01-01

    Describes ongoing work in designing modular software components based on open standards and a specific instructional design theory, instructional transaction theory. Focuses on applied technological solutions to overcome identified limitations of current authoring environments, including proprietary architectures, instructional design theory…

  18. Reliability-based robust design optimization of vehicle components, Part II: Case studies

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based optimization, reliability-based sensitivity analysis, and robust design methods were employed in Part I to propose an effective approach for reliability-based robust design optimization of vehicle components. Applications of the method to the reliability-based robust optimization of vehicle components are further discussed in this paper. Examples of axles, torsion bars, and coil and composite springs are illustrated for numerical investigation. The results show that the proposed method is efficient for reliability-based robust design optimization of vehicle components.

  19. Application of principal component analysis in phase-shifting photoelasticity.

    PubMed

    Quiroga, Juan A; Gómez-Pedrero, José A

    2016-03-21

    Principal component analysis (PCA) phase shifting is a useful tool for fringe pattern demodulation in phase-shifting interferometry. PCA has no restrictions on background intensity or fringe modulation, and it is a self-calibrating phase sampling algorithm (PSA). Moreover, the technique is well suited for analyzing arbitrary sets of phase-shifted interferograms due to its low computational cost. In this work, we have adapted the standard phase-shifting algorithm based on PCA to the particular case of photoelastic fringe patterns. Compared with conventional PSAs used in photoelasticity, the PCA method does not need calibrated phase steps and, given that it can deal with an arbitrary number of images, it presents good noise rejection properties, even for complicated cases such as low-order isochromatic photoelastic patterns. PMID:27136792
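
    The demodulation step itself is short: subtract the temporal mean of the interferogram stack, take the first two principal components, and recover the phase with an arctangent. A minimal sketch on synthetic fringes (the global sign and piston ambiguity inherent to the method is left unresolved):

```python
import numpy as np

# Synthetic stack: M interferograms with unknown, uneven phase shifts.
h, w, M = 64, 64, 7
y, x = np.mgrid[0:h, 0:w]
phase = 2 * np.pi * (x ** 2 + y ** 2) / 4000.0       # true phase map
deltas = np.sort(np.random.default_rng(9).uniform(0, 2 * np.pi, M))
stack = np.stack([100 + 50 * np.cos(phase + d) for d in deltas], axis=2)

I = stack.reshape(-1, M)
I = I - I.mean(axis=1, keepdims=True)                # remove temporal mean
U, s, Vt = np.linalg.svd(I, full_matrices=False)     # PCA via SVD
pc1, pc2 = U[:, 0] * s[0], U[:, 1] * s[1]            # quadrature images
demod = np.arctan2(pc2, pc1).reshape(h, w)           # phase, up to sign/offset
print(demod.shape)
```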

  20. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approaches in the literature, which only use centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market. PMID:25095276

  1. Probabilistic structural analysis methods for space propulsion system components

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.

  2. Probabilistic structural analysis methods for space propulsion system components

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1987-01-01

    The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. The methodology has led to significant technical progress in several important aspects of probabilistic structural analysis. The program and accomplishments to date are summarized.

  3. Using Dynamic Master Logic Diagram for component partial failure analysis

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    A methodology using the Dynamic Master Logic Diagram (DMLD) for the evaluation of component partial failure is presented. Since past PRAs have not focused on partial failure effects, component reliability has been based only on the binary-state assumption, i.e., a component is defined as either fully failed or fully functioning. This paper develops an approach to predict and estimate component partial failure on the basis of the fuzzy-state assumption. An example of the application of this methodology to the reliability function diagram of a centrifugal pump is presented.

  4. Blind Extraction of an Exoplanetary Spectrum through Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.; Tinetti, G.; Deroo, P.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J.

    2013-03-01

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a "blind" analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.

  5. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    SciTech Connect

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J.; Deroo, P.

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.

  6. Principal component analysis for LISA: The time delay interferometry connection

    SciTech Connect

    Romano, J.D.; Woan, G.

    2006-05-15

    Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
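
    The statistical point can be illustrated with a toy model: if laser noise enters every channel along a known direction, it dominates the noise covariance, and the eigenvectors orthogonal to that direction span the laser-noise-free data combinations. A three-channel sketch (an invented stand-in, not the full LISA response):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
laser = rng.normal(scale=100.0, size=n)       # dominant laser frequency noise
d = np.stack([laser + rng.normal(size=n),     # the same laser noise enters
              laser + rng.normal(size=n),     # all channels, i.e. along the
              laser + rng.normal(size=n)])    # direction (1, 1, 1)

vals, vecs = np.linalg.eigh(np.cov(d))
quiet = vecs[:, vals < 10.0]      # small-eigenvalue eigenvectors: TDI-like
                                  # combinations insensitive to laser noise
print(quiet.T @ np.ones(3) / np.sqrt(3))      # ~0: no laser-noise leakage
```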

  7. Principal Component Analysis for pattern recognition in volcano seismic spectra

    NASA Astrophysics Data System (ADS)

    Unglert, Katharina; Jellinek, A. Mark

    2016-04-01

    Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
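
    The two-step recipe is compact on synthetic data: project the spectra onto a few principal components, then cluster the projections hierarchically. A sketch with simulated tremor spectra drawn from two latent spectral shapes (scipy's linkage/fcluster; no observatory data involved):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(11)
freqs = np.linspace(0, 10, 200)

def peak(f0):
    return np.exp(-((freqs - f0) ** 2) / 0.5)

spectra = np.vstack([peak(2.0) + 0.1 * rng.normal(size=200) for _ in range(50)]
                    + [peak(6.0) + 0.1 * rng.normal(size=200) for _ in range(50)])

scores = PCA(n_components=3).fit_transform(spectra)
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(np.bincount(labels))      # two clusters recovering the two shapes
```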

  8. Principal Component Analysis of Spectroscopic Imaging Data in Scanning Probe Microscopy

    SciTech Connect

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for data analysis in the band excitation family of scanning probe microscopies based on principal component analysis (PCA) is explored. PCA utilizes the similarity between spectra within the image to select the relevant response components. For small signal variations within the image, the PCA components coincide with the results of deconvolution using a simple harmonic oscillator model. For strong signal variations, PCA provides an effective approach to rapidly process, de-noise, and compress the data. The extension of PCA to correlation function analysis is demonstrated. The prospects of PCA as a universal tool for data analysis and representation in multidimensional SPMs are discussed.

  9. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    ERIC Educational Resources Information Center

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to identify the components of the internal control-based administrative system in secondary schools, and to perform a confirmatory factor analysis (CFA) to confirm the goodness of fit between the empirical data and the resulting component model. The study consisted of three steps: 1) studying of principles, ideas, and theories…

  10. Knowledge-guided gene ranking by coordinative component analysis

    PubMed Central

    2010-01-01

    Background In cancer, gene networks and pathways often exhibit dynamic behavior, particularly during the process of carcinogenesis. Thus, it is important to prioritize those genes that are strongly associated with the functionality of a network. Traditional statistical methods are often inept to identify biologically relevant member genes, motivating researchers to incorporate biological knowledge into gene ranking methods. However, current integration strategies are often heuristic and fail to incorporate fully the true interplay between biological knowledge and gene expression data. Results To improve knowledge-guided gene ranking, we propose a novel method called coordinative component analysis (COCA) in this paper. COCA explicitly captures those genes within a specific biological context that are likely to be expressed in a coordinative manner. Formulated as an optimization problem to maximize the coordinative effort, COCA is designed to first extract the coordinative components based on a partial guidance from knowledge genes and then rank the genes according to their participation strengths. An embedded bootstrapping procedure is implemented to improve statistical robustness of the solutions. COCA was initially tested on simulation data and then on published gene expression microarray data to demonstrate its improved performance as compared to traditional statistical methods. Finally, the COCA approach has been applied to stem cell data to identify biologically relevant genes in signaling pathways. As a result, the COCA approach uncovers novel pathway members that may shed light into the pathway deregulation in cancers. Conclusion We have developed a new integrative strategy to combine biological knowledge and microarray data for gene ranking. The method utilizes knowledge genes for a guidance to first extract coordinative components, and then rank the genes according to their contribution related to a network or pathway. The experimental results show that

  11. Sparse principal component analysis by choice of norm

    PubMed Central

    Luo, Ruiyan; Zhao, Hongyu

    2012-01-01

    Recent years have seen the development of several methods for sparse principal component analysis due to its importance in the analysis of high-dimensional data. Despite demonstrations of their usefulness in practical applications, they are limited in terms of the lack of orthogonality in the loadings (coefficients) of different principal components, the existence of correlation among the principal components, the expensive computation needed, and the lack of theoretical results such as consistency in high-dimensional situations. In this paper, we propose a new sparse principal component analysis method by introducing a new norm to replace the usual norm in traditional eigenvalue problems, and propose an efficient iterative algorithm to solve the optimization problems. With this method, we can efficiently obtain uncorrelated principal components or orthogonal loadings, and achieve the goal of explaining a high percentage of variation with sparse linear combinations. Due to the strict convexity of the new norm, we can prove the convergence of the iterative method and provide a detailed characterization of the limits. We also prove that the obtained principal component is consistent for a single-component model in high-dimensional situations. As an illustration, we apply this method to real gene expression data with competitive results. PMID:23524453
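
    For contrast with the new-norm method proposed here, the widely used l1-penalized formulation is available off the shelf; a brief sketch with scikit-learn's SparsePCA (this is one of the earlier approaches the paper is positioned against, not the paper's own method):

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(12)
X = rng.normal(size=(100, 30))
X[:, :5] += 3 * rng.normal(size=(100, 1))   # one strong, sparse direction

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
scores = spca.fit_transform(X)
print("nonzero loadings per component:",
      (np.abs(spca.components_) > 1e-10).sum(axis=1))
```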

  12. A component based software framework for vision measurement

    NASA Astrophysics Data System (ADS)

    He, Lingsong; Bei, Lei

    2011-12-01

    In vision measurement applications, an optimal result is usually achieved by combining different processing steps and algorithms. This paper proposes a component-based software framework for vision measurement. First, commonly used processing algorithms of vision measurement are encapsulated into components that are contained in a component library. Each component, which is designed to have its own properties, also provides I/O interfaces for external calls. Second, a software bus is proposed which can plug in components and assemble them to form a vision measurement application. Besides component management and data-line linking, the software bus also provides a message distribution service, which is used to drive all the plugged-in components properly. Third, an XML-based script language is proposed to record the plugging and assembling process of a vision measurement application, which can be used to rebuild the application later. Finally, based on this framework, an application of landmark extraction applied in camera calibration is introduced to show how the framework works.

  13. Chemical components determination via terahertz spectroscopic statistical analysis using microgenetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Ma, Yong; Lu, Zheng; Xia, Zhi-Ning; Cheng, Hong

    2011-03-01

    In public-security-related applications, many suspicious samples may be mixtures of various chemical components, which makes the usual spectral analysis difficult. In this paper, a terahertz spectroscopic statistical analysis method using a microgenetic algorithm (Micro-GA) is proposed. Various chemical components in a mixture can be identified and the concentration of each component can be estimated based on the known spectral data of the pure chemical components. Five chemical mixtures have been tested using Micro-GA. The simulation results agree with other analytical methods, suggesting that Micro-GA has potential applications in terahertz spectral identification of chemical mixtures.
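
    For context, the underlying estimation problem is linear unmixing: the mixture spectrum is modeled as a nonnegative combination of known pure-component spectra, and Micro-GA searches for the concentrations. A least-squares baseline for the same problem takes a few lines (scipy's nnls standing in for the genetic search; spectra are synthetic Gaussians):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(13)
freqs = np.linspace(0.2, 2.5, 300)                        # THz axis (toy)
pure = np.stack([np.exp(-((freqs - f0) ** 2) / 0.01)      # library of 5 pure
                 for f0 in (0.5, 0.9, 1.3, 1.8, 2.2)]).T  # component spectra

true_conc = np.array([0.4, 0.0, 0.3, 0.0, 0.3])
mixture = pure @ true_conc + 0.01 * rng.normal(size=300)

conc, _ = nnls(pure, mixture)        # nonnegative spectral unmixing
print(np.round(conc, 2))             # close to the true concentrations
```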

  14. A principal component analysis of transmission spectra of wine distillates

    NASA Astrophysics Data System (ADS)

    Rogovaya, M. V.; Sinitsyn, G. V.; Khodasevich, M. A.

    2014-11-01

    A chemometric method of decomposing multidimensional data into a small-sized space, the principal component method, has been applied to the transmission spectra of vintage Moldovan wine distillates. A sample of 42 distillates aged from four to seven years, from six producers, has been used to show the possibility of identifying a producer in a two-dimensional space of principal components describing 94.5% of the data-matrix dispersion. Analysis of the loadings of the first two principal components has shown that, in order to measure the optical characteristics of the samples under study using only two wavelengths, it is necessary to select 380 and 540 nm, instead of the standard 420 and 520 nm, to describe the variability of the distillates by one principal component, or 370 and 520 nm to describe the variability by two principal components.

  15. Bonding and Integration Technologies for Silicon Carbide Based Injector Components

    NASA Technical Reports Server (NTRS)

    Halbig, Michael C.; Singh, Mrityunjay

    2008-01-01

    Advanced ceramic bonding and integration technologies play a critical role in the fabrication and application of silicon carbide based components for a number of aerospace and ground based applications. One such application is a lean direct injector for a turbine engine to achieve low NOx emissions. Ceramic to ceramic diffusion bonding and ceramic to metal brazing technologies are being developed for this injector application. For the diffusion bonding, titanium interlayers (PVD and foils) were used to aid in the joining of silicon carbide (SiC) substrates. The influence of such variables as surface finish, interlayer thickness (10, 20, and 50 microns), processing time and temperature, and cooling rate was investigated. Microprobe analysis was used to identify the phases in the bonded region. For bonds that were not fully reacted, an intermediate phase, Ti5Si3Cx, formed whose thermal expansion is incompatible with that of the substrates, causing thermal stresses and cracking during the processing cool-down. Thinner titanium interlayers and/or longer processing times resulted in stable, compatible phases that did not contribute to microcracking and produced an optimized microstructure. Tensile tests on the joined materials yielded strengths of 13-28 MPa, depending on the SiC substrate material. Non-destructive evaluation using ultrasonic immersion showed well-formed bonds. For the brazing of Kovar fuel tubes to silicon carbide, preliminary development of the joining approach has begun. Various technical issues and requirements for the injector application are addressed.

  16. Critical Components of Effective School-Based Feeding Improvement Programs

    ERIC Educational Resources Information Center

    Bailey, Rita L.; Angell, Maureen E.

    2004-01-01

    This article identifies critical components of effective school-based feeding improvement programs for students with feeding problems. A distinction is made between typical school-based feeding management and feeding improvement programs, where feeding, independent functioning, and mealtime behaviors are the focus of therapeutic strategies.…

  17. Using Independent Component Analysis to Separate Signals in Climate Data

    SciTech Connect

    Fodor, I K; Kamath, C

    2003-01-28

    Global temperature series have contributions from different sources, such as volcanic eruptions and El Nino Southern Oscillation variations. We investigate independent component analysis as a technique to separate unrelated sources present in such series. We first use artificial data, with known independent components, to study the conditions under which ICA can separate the individual sources. We then illustrate the method with climate data from the National Centers for Environmental Prediction.
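
    A minimal sketch of the idea, using scikit-learn's FastICA on two synthetic "climate-like" sources (a slow oscillation and sparse spikes) that are linearly mixed; the data and mixing matrix are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic sources: a slow oscillation (ENSO-like) and sparse negative
# spikes (volcanic-like), linearly mixed into two observed "temperature" series.
rng = np.random.default_rng(0)
t = np.arange(600)
oscillation = np.sin(2 * np.pi * t / 48.0)
spikes = np.zeros(t.size)
spikes[rng.choice(t.size, 5, replace=False)] = -3.0
S = np.column_stack([oscillation, spikes])
A = np.array([[1.0, 0.5], [0.7, 1.2]])               # unknown mixing matrix
X = S @ A.T                                          # observed series

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                         # recovered sources, up to
                                                     # permutation and scaling
```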

  18. [Advances in independent component analysis and its application].

    PubMed

    Chen, Huafu; Yao, Dezhong

    2003-06-01

    Independent component analysis (ICA) is a new technique in statistical signal processing that decomposes mixed signals into statistically independent components. Reported applications to biomedical and radar signals have demonstrated its promise for various blind signal separation problems. This paper reviews the progress of ICA, including its principles, algorithms, and applications, as well as future research directions, with the aim of promoting further theoretical and applied research. PMID:12856621

  19. Parallel PDE-Based Simulations Using the Common Component Architecture

    SciTech Connect

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-03-05

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications.

  20. Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools

    SciTech Connect

    Barrand, Guy

    2002-08-20

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualization), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organization, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimizing re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their abstract interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  1. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform. PMID:15362128

  2. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots

    NASA Astrophysics Data System (ADS)

    Pereira, Jorge Costa; Azevedo, Julio Cesar R.; Knapik, Heloise G.; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  3. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots.

    PubMed

    Pereira, Jorge Costa; Azevedo, Julio Cesar R; Knapik, Heloise G; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions). PMID:27111155

  4. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    NASA Astrophysics Data System (ADS)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is obtained by aggregating the component models. System parameters are identified using a finite element technique, and the model has then been used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  5. Principal Component Analysis for Enhancement of Infrared Spectra Monitoring

    NASA Astrophysics Data System (ADS)

    Haney, Ricky Lance

    The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality that in some cases may have contained toxic oil components due to bleed air that flows from outside the aircraft and then through the engines into the aircraft cabin. Significant short and long-term medical issues for aircraft crew have been attributed to exposure. The need for air quality monitoring is especially evident in the fact that currently within an aircraft there are no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA) in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC) to simplify complex, multi-component infrared (IR) spectra data sets into a reduced data set used for determination of the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectrum region. Application of this analytical numerical technique to IR spectrum analysis is important in improving performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results with the corresponding PCR and PCC to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two and three component systems that could potentially be present as air
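
    A minimal sketch of the PCA/PCR step described above, assuming synthetic spectra with two overlapping bands: principal component scores of the spectra are regressed against known concentrations so that new mixtures can be quantified despite band overlap. All data here are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Simulated training spectra: a target band overlapped by an interferent band,
# so single-wavelength calibration fails but regression on PC scores works.
rng = np.random.default_rng(0)
n_train, n_wavelengths = 80, 500
conc = rng.uniform(0, 1, n_train)                       # known concentrations
band = np.exp(-np.linspace(-3, 3, n_wavelengths) ** 2)  # target absorption band
interferent = np.roll(band, 40)                         # overlapping band
spectra = (np.outer(conc, band)
           + np.outer(rng.uniform(0, 1, n_train), interferent)
           + 0.01 * rng.standard_normal((n_train, n_wavelengths)))

pcr = make_pipeline(PCA(n_components=4), LinearRegression())
pcr.fit(spectra, conc)
print(pcr.predict(spectra[:3]))                         # ~ conc[:3]
print(conc[:3])
```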

  6. Reliability and Creep/Fatigue Analysis of a CMC Component

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Gyekenyesi, John Z.; Gyekenyesi, John P.

    2007-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling; thus leading to increased engine efficiencies. There is a need for convenient design tools that can accommodate various loading conditions and material data with their associated uncertainties to estimate the minimum predicted life as well as the failure probabilities of a structural component. This paper presents a review of the life prediction and probabilistic analyses performed for a CMC turbine stator vane. A computer code, NASALife, is used to predict the life of a 2-D woven silicon carbide fiber reinforced silicon carbide matrix (SiC/SiC) turbine stator vane due to a mission cycle which induces low cycle fatigue and creep. The output from this program includes damage from creep loading, damage due to cyclic loading and the combined damage due to the given loading cycle. Results indicate that the trends predicted by NASALife are as expected for the loading conditions used for this study. In addition, a combination of woven composite micromechanics, finite element structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Results indicate that reducing the scatter in proportional limit strength of the vane material has the greatest effect in improving the overall reliability of the CMC vane.

  7. Failure Rate Data Analysis for High Technology Components

    SciTech Connect

    L. C. Cadwallader

    2007-07-01

    Understanding component reliability helps designers create more robust future designs and supports efficient and cost-effective operations of existing machines. The accelerator community can leverage the commonality of its high-vacuum and high-power systems with those of the magnetic fusion community to gain access to a larger database of reliability data. Reliability studies performed under the auspices of the International Energy Agency are the result of an international working group, which has generated a component failure rate database for fusion experiment components. The initial database work harvested published data and now analyzes operating experience data. This paper discusses the usefulness of reliability data, describes the failure rate data collection and analysis effort, discusses reliability for components with scarce data, and points out some of the intersections between magnetic fusion experiments and accelerators.

  8. Lung nodules detection in chest radiography: image components analysis

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Mou, Xuanqin; Yang, Ying; Yan, Hao

    2009-02-01

    We aimed to evaluate the effect of different components of the chest image on the performance of both human observers and the channelized Fisher-Hotelling (CFH) model in a nodule detection task. Irrelevant and relevant components were separated from clinical chest radiographs by employing principal component analysis (PCA). Human observer performance was evaluated in a two-alternative forced-choice (2AFC) experiment on original clinical images and on images containing only anatomical structure obtained by the PCA method. The channelized Fisher-Hotelling model with Laguerre-Gauss basis functions was evaluated to predict human performance. We show that the relevant component is the primary factor influencing nodule detection in chest radiography. There is an obvious difference in detectability between the human observer and the CFH model for nodule detection in images containing only anatomical structure. The CFH model should therefore be used with care.

  9. Application of independent component analysis to Fermilab Booster

    SciTech Connect

    Huang, X.B.; Lee, S.Y.; Prebys, E.; Tomlin, R.; /Indiana U. /Fermilab

    2005-01-01

    Autocorrelation is applied to analyze sets of finite-sampling data such as turn-by-turn beam position monitor (BPM) data in an accelerator. This method of data analysis, called independent component analysis (ICA), is shown to be a powerful beam diagnosis tool, able to decompose sampled signals into their underlying source signals. The authors find that ICA has an advantage over the principal component analysis (PCA) used in model-independent analysis (MIA) in isolating independent modes. The tolerance of the ICA method to noise in the BPM system is systematically studied. ICA is applied to analyze the complicated beam motion in the rapid-cycling Booster synchrotron at Fermilab. Difficulties and limitations of the ICA method are also discussed.

  10. Spatially Weighted Principal Component Analysis for Imaging Classification

    PubMed Central

    Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu

    2014-01-01

    The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629

  11. Partial Component Analysis of a Comprehensive Smoking Program.

    ERIC Educational Resources Information Center

    Horan, John J.; Hackett, Gail

    The effects of a comprehensive program for the treatment of cigarette addiction were investigated. Subjects were 18 university students and 12 community members. Abstinence levels of 40 percent, verified by expired air carbon monoxide tests, were achieved in a six to nine month follow-up period. A partial component analysis revealed that the…

  12. State machine components selection based on minimal transversals

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Łukasz; Mróz, Piotr

    2015-12-01

    This article addresses the problem of state machine component selection using hypergraph theory. The baseline method of exact transversals is presented, along with the computation of exact and simple transversals. Due to the limitations of applying xt-hypergraphs, the authors propose extending the baseline method with minimal transversals.
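
    As a concrete illustration of the transversal machinery mentioned above, the brute-force sketch below enumerates the minimal transversals (hitting sets) of a small hypergraph; it is not the paper's algorithm, just the standard definition made executable for tiny inputs.

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Brute-force enumeration of minimal transversals (hitting sets):
    every edge is intersected, and no proper subset still does so.
    Exponential, so suitable only for small component-selection instances."""
    found = []
    for k in range(1, len(vertices) + 1):
        for combo in combinations(vertices, k):
            t = set(combo)
            if all(e & t for e in edges) and not any(f < t for f in found):
                found.append(t)
    return found

# Toy hypergraph: vertices are candidate components, edges are the sets of
# components able to realize each required partial state machine.
edges = [{1, 2}, {2, 3}, {1, 3}]
print(minimal_transversals([1, 2, 3], edges))   # [{1, 2}, {1, 3}, {2, 3}]
```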

  13. Principal Components Analysis of Triaxial Vibration Data From Helicopter Transmissions

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Huff, Edward M.

    2001-01-01

    Research on the nature of the vibration data collected from helicopter transmissions during flight experiments has led to several crucial observations believed to be responsible for the high rates of false alarms and missed detections in aircraft vibration monitoring systems. This work focuses on one such finding, namely, the need to consider additional sources of information about system vibrations. In this light, helicopter transmission vibration data, collected using triaxial accelerometers, were explored in three different directions, analyzed for content, and then combined using Principal Components Analysis (PCA) to analyze changes in directionality. In this paper, the PCA transformation is applied to 176 test conditions/data sets collected from an OH58C helicopter to derive the overall experiment-wide covariance matrix and its principal eigenvectors. The experiment-wide eigenvectors are then projected onto the individual test conditions to evaluate changes and similarities in their directionality based on the various experimental factors. The paper presents the foundations of the proposed approach, addressing the question of whether experiment-wide eigenvectors accurately model the vibration modes in individual test conditions. The results further determine the value of using directionality and triaxial accelerometers for vibration monitoring and anomaly detection.
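
    A minimal sketch of the projection step described above, under the assumption that each test condition is a time-by-axis matrix: an experiment-wide covariance and its eigenvectors are computed from all conditions stacked together, and each condition is then projected onto those global axes. The data are random stand-ins.

```python
import numpy as np

# Stand-in for the 176 test conditions: each is a (time samples x 3 axes)
# triaxial record. Derive the experiment-wide covariance and eigenvectors,
# then project each condition onto those global axes.
rng = np.random.default_rng(0)
conditions = [rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 3))
              for _ in range(5)]

stacked = np.vstack(conditions)
cov = np.cov(stacked, rowvar=False)              # experiment-wide 3x3 covariance
eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
global_axes = eigvecs[:, ::-1]                   # principal direction first

for i, c in enumerate(conditions):
    scores = (c - c.mean(axis=0)) @ global_axes
    print(i, scores.var(axis=0))                 # variance captured per global axis
```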

  14. Cosmic reionization study: principle component analysis after Planck

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hong; Li, Si-Yu; Li, Yong-Ping; Zhang, Xinmin

    2016-02-01

    The study of reionization history plays an important role in understanding the evolution of our universe. It is commonly believed that the intergalactic medium (IGM) in our universe is fully ionized today; however, the reionization process remains mysterious. A simple instantaneous reionization process is usually adopted in modern cosmology without direct observational evidence. However, the history of the ionization fraction xe(z) influences CMB observables and constraints on the optical depth τ. With mock future data sets based on a featured reionization model, we find that the bias on τ introduced by the instantaneous model cannot be neglected. In this paper, we study the cosmic reionization history in a model-independent way, the so-called principal component analysis (PCA) method, and reconstruct xe(z) at different redshifts z with the data sets of Planck and WMAP 9-year temperature and polarization power spectra, combined with the baryon acoustic oscillation (BAO) data from galaxy surveys and the type Ia supernovae (SN) Union 2.1 sample, respectively. The results show that the reconstructed xe(z) is consistent with instantaneous behavior; however, slight deviations from this behavior exist at some epochs. With the PCA method, after abandoning the noisy modes, we obtain stronger constraints, and the hints of featured xe(z) evolution become somewhat more evident.

  15. Major component analysis of dynamic networks of physiologic organ interactions

    NASA Astrophysics Data System (ADS)

    Liu, Kang K. L.; Bartsch, Ronny P.; Ma, Qianli D. Y.; Ivanov, Plamen Ch

    2015-09-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function.

  16. Model-based reconstruction of objects with inexactly known components

    NASA Astrophysics Data System (ADS)

    Stayman, J. W.; Otake, Y.; Schafer, S.; Khanna, A. J.; Prince, J. L.; Siewerdsen, J. H.

    2012-03-01

    Because tomographic reconstructions are ill-conditioned, algorithms that incorporate additional knowledge about the imaging volume generally yield improved image quality. This is particularly true when measurements are noisy or have missing data. This paper presents a general framework for including the attenuation contributions of specific component objects known to be in the field-of-view as part of the reconstruction. Components such as surgical devices and tools may be modeled explicitly as part of the attenuating volume, but are inexactly known with respect to their locations, poses, and possible deformations. The proposed reconstruction framework, referred to as Known-Component Reconstruction (KCR), is based on this novel parameterization of the object, a likelihood-based objective function, and alternating optimizations between registration and image parameters to jointly estimate both the underlying attenuation and the unknown registrations. A deformable KCR (dKCR) approach is introduced that adopts a control-point-based warping operator to accommodate shape mismatches between the component model and the physical component, thereby allowing for a more general class of inexactly known components. The KCR and dKCR approaches are applied to low-dose cone-beam CT data with spine fixation hardware present in the imaging volume. Such data are particularly challenging due to photon starvation effects in projection data behind the metallic components. The proposed algorithms are compared with traditional filtered-backprojection and penalized-likelihood reconstructions and found to provide substantially improved image quality. Whereas traditional approaches exhibit significant artifacts that complicate detection of breaches or fractures near metal, the KCR framework tends to provide good visualization of anatomy right up to the boundary of surgical devices.

  17. Enantiomeric separation of functionalized ethano-bridged Tröger bases using macrocyclic cyclofructan and cyclodextrin chiral selectors in high-performance liquid chromatography and capillary electrophoresis with application of principal component analysis.

    PubMed

    Weatherly, Choyce A; Na, Yun-Cheol; Nanayakkara, Yasith S; Woods, Ross M; Sharma, Ankit; Lacour, Jérôme; Armstrong, Daniel W

    2014-04-01

    The enantiomeric separation of a series of racemic functionalized ethano-bridged Tröger base compounds was examined by high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). Using HPLC and CE the entire set of 14 derivatives was separated by chiral stationary phases (CSPs) and chiral additives composed of cyclodextrin (native and derivatized) and cyclofructan (derivatized). Baseline separations (Rs≥1.5) in HPLC were achieved for 13 of the 14 compounds with resolution values as high as 5.0. CE produced 2 baseline separations. The separations on the cyclodextrin CSPs showed optimum results in the reversed phase mode, and the LARIHC™ cyclofructan CSPs separations showed optimum results in the normal phase mode. HPLC separation data of the compounds was analyzed using principal component analysis (PCA). The PCA biplot analysis showed that retention is governed by the size of the R1 substituent in the case of derivatized cyclofructan and cyclodextrin CSPs, and enantiomeric resolution closely correlated with the size of the R2 group in the case of non-derivatized γ-cyclodextrin CSP. It is clearly shown that chromatographic retention is necessary but not sufficient for the enantiomeric separations of these compounds. PMID:24631813

  18. Reprint of: Enantiomeric separation of functionalized ethano-bridged Tröger bases using macrocyclic cyclofructan and cyclodextrin chiral selectors in high-performance liquid chromatography and capillary electrophoresis with application of principal component analysis.

    PubMed

    Weatherly, Choyce A; Na, Yun-Cheol; Nanayakkara, Yasith S; Woods, Ross M; Sharma, Ankit; Lacour, Jérôme; Armstrong, Daniel W

    2014-10-01

    The enantiomeric separation of a series of racemic functionalized ethano-bridged Tröger base compounds was examined by high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). Using HPLC and CE the entire set of 14 derivatives was separated by chiral stationary phases (CSPs) and chiral additives composed of cyclodextrin (native and derivatized) and cyclofructan (derivatized). Baseline separations (Rs ≥ 1.5) in HPLC were achieved for 13 of the 14 compounds with resolution values as high as 5.0. CE produced 2 baseline separations. The separations on the cyclodextrin CSPs showed optimum results in the reversed phase mode, and the LARIHC cyclofructan CSPs separations showed optimum results in the normal phase mode. HPLC separation data of the compounds was analyzed using principal component analysis (PCA). The PCA biplot analysis showed that retention is governed by the size of the R1 substituent in the case of derivatized cyclofructan and cyclodextrin CSPs, and enantiomeric resolution closely correlated with the size of the R2 group in the case of non-derivatized γ-cyclodextrin CSP. It is clearly shown that chromatographic retention is necessary but not sufficient for the enantiomeric separations of these compounds. PMID:24910297

  19. Factor analysis for isolation of the Raman spectra of aqueous sulfuric acid components

    SciTech Connect

    Malinowski, E.R.; Cox, R.A.; Haldna, U.L.

    1984-04-01

    The Raman spectra of 16 sulfuric acid/water mixtures over the entire mole fraction range were studied by various factor analysis techniques. Abstract factor analysis showed that three factors account for 98.69% of the variation in the data, with a real error of 13%. Key-set factor analysis was used to identify three spectral wavenumbers unique to each component. Spectral-isolation factor analysis, based on the key wavenumbers, revealed the spectrum of each unknown component. Target factor analysis, based on the isolated spectra, yielded the relative amounts of the three spectral components. The concentration profiles obtained from the factor loadings, as well as the isolated spectra, were used to identify the chemical species.

  20. Performance-based seismic design of nonstructural building components: The next frontier of earthquake engineering

    NASA Astrophysics Data System (ADS)

    Filiatrault, Andre; Sullivan, Timothy

    2014-08-01

    With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major

  1. Guide for Hydrogen Hazards Analysis on Components and Systems

    NASA Technical Reports Server (NTRS)

    Beeson, Harold; Woods, Stephen

    2003-01-01

    The physical and combustion properties of hydrogen give rise to hazards that must be considered when designing and operating a hydrogen system. One of the major concerns in the use of hydrogen is that of fire or detonation because of hydrogen's wide flammability range, low ignition energy, and high flame speed. Other concerns include the contact and interaction of hydrogen with materials, such as the hydrogen embrittlement of materials and the formation of hydrogen hydrides. The low temperatures of liquid and slush hydrogen bring other concerns related to material compatibility and pressure control; this is especially important when dissimilar, adjoining materials are involved. The potential hazards arising from these properties and design features necessitate a proper hydrogen hazards analysis before introducing a material, component, or system into hydrogen service. The objective of this guide is to describe the NASA Johnson Space Center White Sands Test Facility hydrogen hazards analysis method that should be performed before hydrogen is used in components and/or systems. The method is consistent with standard practices for analyzing hazards. It is recommended that this analysis be made before implementing a hydrogen component qualification procedure. A hydrogen hazards analysis is a useful tool for hydrogen-system designers, system and safety engineers, and facility managers. A hydrogen hazards analysis can identify problem areas before hydrogen is introduced into a system, preventing damage to hardware, delay or loss of mission or objective, and possible injury or loss of life.

  2. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  3. Component-Based Software for High-Performance Scientific Computing

    SciTech Connect

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis B.; Janssen, Curtis; Kenny, Joseph P.; Krishnan, Manoj Kumar; Kohl, James A.; Kumfert, Gary K.; McInnes, Lois C.; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-06-26

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  4. Principal components analysis of Mars in the near-infrared

    NASA Astrophysics Data System (ADS)

    Klassen, David R.

    2009-11-01

    Principal components analysis and target transformation are applied to near-infrared image cubes of Mars in a study to disentangle the spectra into a small number of spectral endmembers and characterize the spectral information. The image cubes are ground-based telescopic data from the NASA Infrared Telescope Facility during the 1995 and 1999 near-aphelion oppositions, when ice clouds were plentiful [Clancy, R.T., Grossman, A.W., Wolff, M.J., James, P.B., Rudy, D.J., Billawala, Y.N., Sandor, B.J., Lee, S.W., Muhleman, D.O., 1996. Icarus 122, 36-62; Wolff, M.J., Clancy, R.T., Whitney, B.A., Christensen, P.R., Pearl, J.C., 1999b. In: The Fifth International Conference on Mars, July 19-24, 1999, Pasadena, CA, pp. 6173], and the 2003 near-perihelion opposition, when ice clouds are generally limited to topographically high regions (volcano cap clouds) but airborne dust is more common [Martin, L.J., Zurek, R.W., 1993. J. Geophys. Res. 98 (E2), 3221-3246]. The heart of the technique is to transform the data into a vector space along the dimensions of greatest spectral variance and then choose endmembers based on these new "trait" dimensions. This is done through a target transformation technique, comparing linear combinations of the principal components to a mineral spectral library. Mars can generally be modeled with only three spectral endmembers, which account for almost 99% of the data variance. This is similar to results in the thermal infrared with Mars Global Surveyor Thermal Emission Spectrometer data [Bandfield, J.L., Hamilton, V.E., Christensen, P.R., 2000. Science 287, 1626-1630]. The globally recovered surface endmembers can be used as inputs to radiative transfer modeling in order to measure ice abundance in martian clouds [Klassen, D.R., Bell III, J.F., 2002. Bull. Am. Astron. Soc. 34, 865], and a preliminary test of this technique is also presented.
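
    A minimal sketch of the target transformation step, assuming mock data: a candidate library spectrum is fit by least squares to the span of the retained principal components, and the relative residual indicates whether it is a plausible endmember.

```python
import numpy as np

def target_transform(pcs, target):
    """Least-squares fit of a candidate spectrum onto the span of the retained
    principal components; a small relative residual marks a plausible endmember."""
    coeffs, *_ = np.linalg.lstsq(pcs, target, rcond=None)
    resid = target - pcs @ coeffs
    return coeffs, np.linalg.norm(resid) / np.linalg.norm(target)

# Mock image cube: 500 spectra at 200 wavelengths with a rank-3 scene.
rng = np.random.default_rng(0)
cube = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 200))
mean = cube.mean(axis=0)
_, _, vt = np.linalg.svd(cube - mean, full_matrices=False)
pcs = vt[:3].T                                   # (n_wavelengths, 3)

print(target_transform(pcs, cube[0] - mean)[1])  # ~ 0: in-span "library" spectrum
print(target_transform(pcs, rng.standard_normal(200))[1])  # ~ 1: poor fit
```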

  5. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete data set, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.

  6. Analysis of exposure due to work on activated components

    SciTech Connect

    Cossairt, J.D.

    1987-09-01

    In this brief note the author summarizes an analysis of the exposure incurred in various maintenance jobs involving activated accelerator and beam-line components at Fermilab. A tabulation was made of the parameters associated with each job, including rather terse descriptions of the various tasks. Various plots of the quantities in the table are presented. All exposure rates are in mR/hr, and all accumulated exposures are in mR. The exposure rates were generally measured at the Fermilab-standard one-foot distance from the activated component. Accumulated exposures are taken from the self-reading pocket dosimeter records maintained by the radiation control technicians.

  7. Temporal variations in ozone concentrations derived from Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Yonemura, S.; Kawashima, S.; Matsueda, H.; Sawa, Y.; Inoue, S.; Tanimoto, H.

    2008-03-01

    The application of principal components and cluster analysis to vertical ozone concentration profiles in Tsukuba, Japan, has been explored. Average monthly profiles and profiles of the ratio between the standard deviation and the absolute ozone concentration (SDPR) of 1 km data were calculated from the original ozone concentration data. Mean (first) and gradient (second) components explained more than 80% of the variation in both the 0-6 km tropospheric and 11-20 km troposphere-stratosphere (interspheric) layers. The principal components analysis not only reproduced the expected inverse relationship between mean ozone concentration and tropopause height (r² = 0.41), which in the tropospheric layer is stronger in spring and summer, but also yielded new information, as follows. The larger gradient component score in summer for the interspheric layer points to the seasonal variation of troposphere-stratosphere exchange. The minimum SDPR was at about 3 km in the tropospheric layer and the maximum was at about 17 km in the interspheric layer. The tropospheric SDPR mean component score was larger in summer, possibly reflecting the mixing of Pacific maritime air masses with urban air masses. The cluster analysis of the monthly ozone profiles for the 1970s and 2000s revealed different patterns for winter and summer. The month of May was part of the winter pattern in the 1970s but part of the summer pattern during the 2000s. This statistically detected change likely reflects the influence of global warming. Thus, these two statistical analysis techniques can be powerful tools for identifying features of ozone concentration profiles.

  8. Multi-component stress history measurements and analysis

    SciTech Connect

    Stout, R.B.; Larson, D.B.

    1987-08-01

    Piezoresistance foil gages were tested dynamically in multi-component stress-strain experiments so that the actual shock wave conditions of underground nuclear testing could be more closely simulated. The multi-component stress-strain histories were created in polymethylmethacrylate (PMMA) by using chemical explosions to generate spherical shock waves. In addition to the resistivity measurements from the foil gages, particle velocity was also measured at several radial positions from the explosion to provide a complete set of data for analysis. The gage interpretation (inverse) problem for multi-component stress-strain fields requires obtaining a sufficient number of independent measurements so that the different stress-strain components influencing the gage response can be uniquely inferred. The piezoresistance measurements provided data from a triple-material foil gage and from bare ytterbium foil gages. An analysis shows that the triple-material gage, containing foils of ytterbium, manganin, and constantan, provided three independent resistivity measurements when oriented perpendicular to the radially propagating shock front. An analysis of the ytterbium foil gages, which were tested in both perpendicular (normal) and parallel (tangential) orientations relative to the radial shock front, shows that the resistivity responses from these two orientations are independent measurements. The results from the analyses of the gages compared well with experimental data. This analysis shows clearly that the material properties of the foil, the dimensions of the foil, and the material surrounding the foil greatly influence the total resistivity response of foil gages in a multi-component stress-strain field. 25 refs., 16 figs.

  9. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos

    2013-01-01

    We present a principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and the temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet its performance requirements.
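
    A toy illustration of the calibration idea, not the instrument pipeline: a shared drift (the dominant noise component) is estimated from reference pixels via SVD and regressed out of every pixel's time series. All numbers are synthetic.

```python
import numpy as np

# Synthetic stand-in: a shared drift (thermal/electrical instability) is the
# dominant noise component; estimate it from reference pixels and regress it
# out of every pixel's time series.
rng = np.random.default_rng(0)
n_frames, n_pix = 300, 64
drift = 0.1 * np.cumsum(rng.standard_normal(n_frames))   # shared instability
pixels = rng.standard_normal((n_frames, n_pix)) + drift[:, None]

ref = pixels[:, :8]                                       # reference pixels
u, s, vt = np.linalg.svd(ref - ref.mean(axis=0), full_matrices=False)
pc0 = u[:, 0]                                             # zeroth component (unit norm)

amps = pc0 @ (pixels - pixels.mean(axis=0))               # per-pixel amplitude
cleaned = pixels - np.outer(pc0, amps)
print(pixels.std(), cleaned.std())                        # correlated noise drops
```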

  10. Insights Into Categorization Of Solar Flares Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Norquist, D. C.

    2012-05-01

    Using time sequences of solar chromospheric images acquired with the USAF/NSO Improved Solar Observing Network (ISOON) prototype telescope, we have applied principal component analysis (PCA) to time series of both erupting and non-erupting active regions. Our primary purpose is to develop an advanced data-driven model for solar flare prediction using machine learning algorithms, with principal components as the input. Using the principal components, we show a clear separation in the eigenvectors, which fall into three major flaring categories: weak flares (GOES peak intensity < C4.0); intermediate flares (GOES peak intensity between C4.0 and C8.0); and strong flares (GOES peak intensity > C8.0). In this paper, we provide insights into the implications for the underlying physical mechanisms that describe these three distinct categories. This work was funded by the U.S. Air Force Office of Scientific Research (AFOSR).

  11. Probabilistic structural analysis methods for critical SSME propulsion components

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    The development of a three-dimensional inelastic analysis methodology for the Space Shuttle main engine (SSME) structural components is described. The methodology is composed of: (1) composite load spectra, (2) probabilistic structural analysis methods, (3) the probabilistic finite element theory, and (4) probabilistic structural analysis. Progress in the development of generic probabilistic models for various individual loads, which consist of a steady-state load, a periodic load, a random load, and a spike, is discussed. The capabilities of the Numerical Evaluation of Stochastic Structures Under Stress finite element code, designed for probabilistic structural analysis of the SSME, are examined. Variational principles for formulating probabilistic finite elements and a structural analysis for evaluating the effects of geometric and material property tolerances on the structural response of turbopump blades are being designed.

  12. Principal Component Analysis of Terrestrial and Venusian Topography

    NASA Astrophysics Data System (ADS)

    Stoddard, P. R.; Jurdy, D. M.

    2015-12-01

    We use Principal Component Analysis (PCA) as an objective tool for analyzing, comparing, and contrasting topographic profiles of different or similar features from different locations and planets. To do so, we take average profiles of a set of features and form a cross-correlation matrix, which is then diagonalized to determine its principal components. These components, not merely numbers, represent actual profile shapes that give a quantitative basis for comparing different sets of features. For example, PCA for terrestrial hotspots shows the main component to be a generic dome shape. Secondary components show a more sinusoidal shape, related to the lithospheric loading response, and thus give information about the lithospheric setting of the various hotspots. We examine a range of terrestrial spreading centers: fast, slow, ultra-slow, incipient, and extinct, and compare these to several chasmata on Venus (including Devana, Ganis, Juno, Parga, and Kuanja). For upwelling regions, we consider the oceanic Hawaii, Reunion, and Iceland hotspots and Yellowstone, a prototypical continental hotspot. Venus has approximately one dozen broad topographic and geoid highs called regiones; our analysis includes Atla, Beta, and W. Eistla regiones, of which Atla and Beta are widely thought to be the most likely to be currently or recently active. Analysis of terrestrial rifts suggests increasing uniformity of shape among rifts with increasing spreading rate. Venus' uniformity correlations rank considerably lower than the terrestrial ones. Extrapolating the correlation versus spreading rate suggests that Venus' chasmata, if analogous to terrestrial spreading centers, most resemble the ultra-slow spreading regime (less than 12 mm/yr) of the Arctic Gakkel ridge. PCA will provide an objective measurement of this correlation.

  13. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.

  14. Common and Cluster-Specific Simultaneous Component Analysis

    PubMed Central

    De Roover, Kim; Timmerman, Marieke E.; Mesquita, Batja; Ceulemans, Eva

    2013-01-01

    In many fields of research, so-called ‘multiblock’ data are collected, i.e., data containing multivariate observations that are nested within higher-level research units (e.g., inhabitants of different countries). Each higher-level unit (e.g., country) then corresponds to a ‘data block’. For such data, it may be interesting to investigate the extent to which the correlation structure of the variables differs between the data blocks. More specifically, when capturing the correlation structure by means of component analysis, one may want to explore which components are common across all data blocks and which components differ across the data blocks. This paper presents a common and cluster-specific simultaneous component method which clusters the data blocks according to their correlation structure and allows for common and cluster-specific components. Model estimation and model selection procedures are described and simulation results validate their performance. Also, the method is applied to data from cross-cultural values research to illustrate its empirical value. PMID:23667463

  15. Principal Component Analysis of Arctic Solar Irradiance Spectra

    NASA Technical Reports Server (NTRS)

    Rabbette, Maura; Pilewskie, Peter; Gore, Warren J. (Technical Monitor)

    2000-01-01

    During the FIRE (First ISCCP Regional Experiment) Arctic Cloud Experiment and the coincident SHEBA (Surface Heat Budget of the Arctic Ocean) campaign, detailed moderate-resolution solar spectral measurements were made to study the radiative energy budget of the coupled Arctic Ocean-atmosphere system. The NASA Ames Solar Spectral Flux Radiometers (SSFRs) were deployed on the NASA ER-2 and at the SHEBA ice camp. Using the SSFRs, we acquired continuous solar spectral irradiance (380-2200 nm) throughout the atmospheric column. Principal Component Analysis (PCA) was used to characterize the several tens of thousands of retrieved SSFR spectra and to determine the number of independent pieces of information that exist in the visible to near-infrared solar irradiance spectra. It was found in both the upwelling and downwelling cases that almost 100% of the spectral information (irradiance retrieved from 1820 wavelength channels) was contained in the first six extracted principal components. The majority of the variability in the Arctic downwelling solar irradiance spectra was explained by a few fundamental components, including infrared absorption, scattering, water vapor, and ozone. PCA of the SSFR upwelling Arctic irradiance spectra successfully separated surface ice and snow reflection from overlying cloud into distinct components.

  16. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    PubMed Central

    Jirayucharoensak, Suwicha; Pan-Ngum, Setha; Israsena, Pasin

    2014-01-01

    Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlation between input signals that is crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate overfitting problem, principal component analysis (PCA) is applied to extract the most important components of initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracy of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracy by 5.55% and 6.53%. Moreover, DLN provides better performance compared to SVM and naive Bayes classifiers. PMID:25258728
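
    As a rough illustration of the preprocessing chain described here (PCA compression of PSD features followed by adaptation to nonstationarity), the sketch below subtracts an exponential moving mean per principal component. The paper's covariate shift adaptation is more specific; the feature sizes, drift model, and adaptation rate below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
feats = rng.standard_normal((2000, 160))        # e.g. 32 channels x 5 PSD bands
feats += np.linspace(0.0, 3.0, 2000)[:, None]   # slow nonstationary drift

pcs = PCA(n_components=50).fit_transform(feats)

adapted = np.empty_like(pcs)
mean = np.zeros(pcs.shape[1])
alpha = 0.02                                    # adaptation rate (assumed)
for t, z in enumerate(pcs):                     # stream the samples in order
    mean = (1.0 - alpha) * mean + alpha * z     # exponential moving mean
    adapted[t] = z - mean                       # subtract the tracked drift
```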

  18. Incorporating finite element analysis into component life and reliability

    NASA Technical Reports Server (NTRS)

    August, Richard; Zaretsky, Erwin V.

    1991-01-01

    A method for calculating a component's design survivability by incorporating finite element analysis and probabilistic material properties was developed. The method evaluates design parameters through direct comparisons of component survivability expressed in terms of Weibull parameters. The analysis was applied to a rotating disk with mounting bolt holes. The highest probability of failure occurred at, or near, the maximum shear stress region of the bolt holes. Distribution of failure as a function of Weibull slope affects the probability of survival. Where Weibull parameters are unknown for a rotating disk, it may be permissible to assume Weibull parameters, as well as the stress-life exponent, in order to determine the disk speed where the probability of survival is highest.

  19. Principal component analysis: a review and recent developments.

    PubMed

    Jolliffe, Ian T; Cadima, Jorge

    2016-04-13

    Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application. PMID:26953178
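
    The eigenvalue/eigenvector formulation described above fits in a few lines. The following minimal sketch computes principal components as eigenvectors of the sample covariance matrix, ordered by explained variance; the random input is a placeholder.

```python
import numpy as np

def pca(data, n_components):
    """Principal components via eigendecomposition of the covariance matrix."""
    centered = data - data.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]  # descending variance
    return centered @ eigvecs[:, order], eigvals[order]

scores, variances = pca(np.random.default_rng(3).standard_normal((200, 10)), 2)
```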

  20. Independent component analysis for underwater lidar clutter rejection

    NASA Astrophysics Data System (ADS)

    Illig, David W.; Jemison, William D.; Mullen, Linda J.

    2016-05-01

    This work demonstrates a new statistical approach towards backscatter "clutter" rejection for continuous-wave underwater lidar systems: independent component analysis. Independent component analysis is a statistical signal processing technique which can separate a return of interest from clutter in a statistical domain. After highlighting the statistical processing concepts, we demonstrate that underwater lidar target and backscatter returns have very different distributions, facilitating their separation in a statistical domain. Example profiles are provided showing the results of this separation, and ranging experiment results are presented. In the ranging experiment, performance is compared to a more conventional frequency-domain filtering approach. Target tracking is maintained to 14.5 attenuation lengths in the laboratory test tank environment, a 2.5 attenuation length improvement over the baseline.
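
    The separation principle, namely that target and backscatter returns have very different distributions, can be mimicked with a toy mixture: a sparse, highly non-Gaussian target-like return mixed with Gaussian-like clutter in two channels. The sketch below uses scikit-learn's FastICA; the mixing matrix, sparsity level, and kurtosis-based labeling of the recovered components are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n = 20000
target = (rng.random(n) < 1e-3) * rng.random(n)   # sparse, highly non-Gaussian
clutter = rng.normal(0.0, 0.3, n)                 # diffuse Gaussian-like clutter

mixing = np.array([[1.0, 0.8], [0.6, 1.0]])       # assumed mixing matrix
observed = np.c_[target, clutter] @ mixing.T

sources = FastICA(n_components=2, random_state=0).fit_transform(observed)
# The recovered component with the larger excess kurtosis is target-like.
centered = sources - sources.mean(axis=0)
kurt = (centered**4).mean(axis=0) / centered.var(axis=0)**2 - 3.0
print("excess kurtosis of recovered components:", kurt)
```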

  1. Deuterium incorporation in biomass cell wall components by NMR analysis

    SciTech Connect

    Foston, Marcus B; McGaughey, Joseph; O'Neill, Hugh Michael; Evans, Barbara R; Ragauskas, Arthur J

    2012-01-01

    A commercially available deuterated kale sample was analyzed for deuterium incorporation by ionic liquid solution 2H and 1H nuclear magnetic resonance (NMR). This protocol was found to effectively measure the percent deuterium incorporation at 33%, comparable to the 31% value determined by combustion. The solution NMR technique also suggested by a qualitative analysis that deuterium is preferentially incorporated into the carbohydrate components of the kale sample.

  2. Guidelines for Design and Analysis of Large, Brittle Spacecraft Components

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1993-01-01

    There were two related parts to this work. The first, conducted at The Aerospace Corporation, was to develop and define methods for integrating the statistical theory of brittle strength with conventional finite element stress analysis, and to carry out a limited laboratory test program to illustrate the methods. The second part, separately funded at Aerojet Electronic Systems Division, was to create the finite element postprocessing program for integrating the statistical strength analysis with the structural analysis. The second part was monitored by Capt. Jeff McCann of USAF/SMC, as Special Study No. 11, which authorized Aerojet to support Aerospace on this work requested by NASA. This second part is documented in Appendix A. The activity at Aerojet was guided by the Aerospace methods developed in the first part of this work. This joint work of Aerospace and Aerojet stemmed from prior related work for the Defense Support Program (DSP) Program Office, to qualify the DSP sensor main mirror and corrector lens for flight as part of a shuttle payload. These large brittle components of the DSP sensor are provided by Aerojet. This document defines rational methods for addressing the structural integrity and safety of large, brittle, payload components, which have low and variable tensile strength and can suddenly break or shatter. The methods are applicable to the evaluation and validation of such components, which, because of size and configuration restrictions, cannot be validated by direct proof test.

  3. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  4. Extension of a System Level Tool for Component Level Analysis

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    This paper presents an extension of a numerical algorithm for network flow analysis code to perform multi-dimensional flow calculation. The one dimensional momentum equation in network flow analysis code has been extended to include momentum transport due to shear stress and transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear driven flow in a rectangular cavity) are presented as benchmark for the verification of the numerical scheme.

  5. Multivariate concentration determination using principal component regression with residual analysis

    PubMed Central

    Keithley, Richard B.; Heien, Michael L.; Wightman, R. Mark

    2009-01-01

    Data analysis is an essential tenet of analytical chemistry, extending the possible information obtained from the measurement of chemical phenomena. Chemometric methods have grown considerably in recent years, but their wide use is hindered because some still consider them too complicated. The purpose of this review is to describe a multivariate chemometric method, principal component regression, in a simple manner from the point of view of an analytical chemist, to demonstrate the need for proper quality-control (QC) measures in multivariate analysis and to advocate the use of residuals as a proper QC method. PMID:20160977
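
    A minimal sketch of principal component regression with a residual-based QC check, in the spirit of the review: regress concentrations on a few PCA scores, then distrust predictions for new spectra whose reconstruction (Q) residual exceeds a threshold calibrated on the training set. The data, the component count, and the 95th-percentile cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
spectra = rng.standard_normal((120, 300))
conc = spectra[:, :3] @ np.array([0.5, -0.2, 0.8])  # mock concentrations

pca = PCA(n_components=3).fit(spectra)
scores = pca.transform(spectra)
model = LinearRegression().fit(scores, conc)

# QC: flag new spectra whose reconstruction (Q) residual is unusually large.
train_q = ((spectra - pca.inverse_transform(scores))**2).sum(axis=1)
threshold = np.percentile(train_q, 95)              # illustrative cutoff

new = rng.standard_normal((10, 300))
new_q = ((new - pca.inverse_transform(pca.transform(new)))**2).sum(axis=1)
trusted = new_q <= threshold            # only report predictions where True
predictions = model.predict(pca.transform(new))
```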

  6. Prediction of p38 map kinase inhibitory activity of 3, 4-dihydropyrido [3, 2-d] pyrimidone derivatives using an expert system based on principal component analysis and least square support vector machine

    PubMed Central

    Shahlaei, M.; Saghaie, L.

    2014-01-01

    A quantitative structure–activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structures was established by means of principal component analysis (PCA) and least square support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied for the prediction of the biological activities of pyrimidone derivatives that were not included in the modeling procedure. The resulting model showed high prediction ability, with a root mean square error of prediction of 0.460 for LS-SVM. The study provided a novel and effective approach for predicting biological activities of 3, 4-dihydropyrido [3,2-d] pyrimidone derivatives as p38 inhibitors and disclosed that LS-SVM can be used as a powerful chemometrics tool for QSAR studies. PMID:26339262

  7. Spectral Synthesis via Mean Field approach to Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ning; Su, Shan-Shan; Kong, Xu

    2016-03-01

    We apply a new statistical analysis technique, the Mean Field approach to Independent Component Analysis (MF-ICA) in a Bayesian framework, to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), and the galaxy spectrum can be reconstructed from these ICs. Compared to other algorithms which decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different methods are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) consistency test of parameters derived with galaxies from the Sloan Digital Sky Survey. We find that our MF-ICA method can not only fit the observed galaxy spectra efficiently, but can also accurately recover the physical parameters of galaxies. We also apply our spectral analysis method to the DEEP2 spectroscopic data, and find it can provide excellent fitting results for low signal-to-noise spectra.
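
    The compress-and-reconstruct idea can be sketched simply: derive a handful of components from a spectral library, then fit an observed spectrum as a linear combination of them. The sketch below substitutes scikit-learn's FastICA for the paper's Mean Field ICA (a different, Bayesian estimator), and all spectra are mock data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
latent = rng.random((5, 1000))              # a few underlying "stellar" modes
library = rng.random((300, 5)) @ latent     # mock stellar spectral library

ica = FastICA(n_components=5, random_state=0, max_iter=1000).fit(library)
ics = ica.components_                       # compressed representation

galaxy = 0.7 * library[10] + 0.3 * library[42]      # mock observed spectrum
mean = library.mean(axis=0)
coeffs, *_ = np.linalg.lstsq(ics.T, galaxy - mean, rcond=None)
reconstruction = mean + ics.T @ coeffs
print("residual rms:", np.sqrt(np.mean((galaxy - reconstruction)**2)))
```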

  8. Identification of the isomers using principal component analysis (PCA) method

    NASA Astrophysics Data System (ADS)

    Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur

    2016-03-01

    In this work, we have carried out a detailed statistical analysis of experimental mass spectra from xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as an energy source for the ionisation processes. We performed experiments and collected data that were analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method in distinguishing isomers that cannot be identified using conventional mass analysis of the dissociative ionisation processes of these molecules. The dependence of the PCA results on the laser pulse energy and the background pressure in the spectrometer is also presented in this work.

  9. A comparative study of principal component analysis and independent component analysis in eddy current pulsed thermography data processing.

    PubMed

    Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W L

    2013-10-01

    Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied to a wide range of materials. Lateral heat diffusion leads to a decrease in temperature contrast between defect and defect-free areas. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for processing thermography image sequences in recent years. However, direct and detailed independent comparisons of the two algorithms and their implementations have been lacking. The aim of this article is to compare the two methods and to determine the optimal technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on artificial and thermal fatigue natural crack detection. PMID:24182145
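
    A minimal sketch of the comparison setup: arrange the thermogram sequence as a (frames x pixels) matrix and extract leading components with both PCA and FastICA, each reshaped back into candidate contrast-enhanced images. The synthetic sequence, the planted defect, and the choice of three components are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(7)
frames, h, w = 50, 32, 32
seq = rng.random((frames, h * w))                   # mock thermogram sequence
defect = np.zeros((h, w))
defect[10:14, 10:14] = 1.0
seq += np.outer(np.exp(-np.arange(frames) / 20.0), defect.ravel())  # cooling

pca_imgs = PCA(n_components=3).fit(seq).components_.reshape(3, h, w)
ica = FastICA(n_components=3, random_state=0, max_iter=1000).fit(seq)
ica_imgs = ica.components_.reshape(3, h, w)
# Inspect each component image and keep the one with the best
# defect-to-background contrast.
```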

  10. Recovering independent components from shifted data using fast independent component analysis and swarm intelligence.

    PubMed

    Rascon, Caleb; Lennox, Barry; Marjanovic, Ognjen

    2009-10-01

    Frequency displacement, or spectral shift, is commonly observed in industrial spectral measurements. It can be caused by many factors, such as sensor de-calibration, or by external influences, which include changes in temperature. The presence of frequency displacement in spectral measurements can cause difficulties when statistical techniques, such as independent component analysis (ICA), are used to analyze them. Using simulated spectral measurements, this paper initially highlights the effect that frequency displacement has on ICA. A post-processing technique, employing particle swarm optimization (PSO), is then proposed that enables ICA to become robust to frequency displacement in spectral measurements. The capabilities of the proposed approach are illustrated using several simulated examples and using tablet data from a pharmaceutical application. PMID:19843365

  11. Applications Of Nonlinear Principal Components Analysis To Behavioral Data.

    PubMed

    Hicks, M M

    1981-07-01

    A quadratic function was derived from variables believed to be nonlinearly related. The method was suggested by Gnanadesikan (1977) and based on an early paper of Karl Pearson (1901) (which gave rise to principal components), in which Pearson demonstrated that a plane of best fit to a system of points could be elicited from the elements of the eigenvector associated with the smallest eigenvalue of the covariance matrix. PMID:26815595

  12. Representation for dialect recognition using topographic independent component analysis

    NASA Astrophysics Data System (ADS)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the tone feature of a dialect is subject to changes in pitch frequency as well as in tone length. It is beneficial for recognition if a representation can be derived that accounts for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences in which the dialect features vary in pitch frequency and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  13. A least-squares framework for Component Analysis.

    PubMed

    De la Torre, Fernando

    2012-06-01

    Over the last century, Component Analysis (CA) methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Locality Preserving Projections (LPP), and Spectral Clustering (SC) have been extensively used as a feature extraction step for modeling, classification, visualization, and clustering. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and nonlinear representations of data in closed-form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank deficient matrices (e.g., small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LPP, SC, and their kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: 1) provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; 2) yields efficient numerical schemes to solve CA techniques; 3) overcomes the small sample size problem; 4) provides a framework to easily extend CA methods. We derive weighted generalizations of PCA, LDA, SC, and CCA, and several new CA techniques. PMID:21911913

  14. Near-infrared spectroscopy (NIRS) analysis of major components of milk and the development of analysis instrument

    NASA Astrophysics Data System (ADS)

    Liu, Jingwei; Ji, Zhongpeng; Tian, Mi

    2014-11-01

    In this study, we introduce a new spectroscopic analysis instrument, along with applied research on near-infrared spectroscopy (NIRS) of the major components of milk. First, we analyzed and compared the characteristics of existing near-infrared spectrometers. Then, according to the spectra of the major milk components, the spectral range, spectral resolution, and other parameters of the analysis instrument were determined, followed by the construction of a spectroscopic analysis instrument based on acousto-optic tunable filters (AOTFs). Second, on the basis of application requirements, we obtained spectral information from a variety of test samples. Finally, qualitative and quantitative testing of the major components of the milk samples was carried out via typical analysis methods and a mathematical model of NIRS. Thus, this study provides a technical reference for the development of spectroscopy instruments and their applied research.

  15. Detection of plate components defects by surface wave based on transducer arrays

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Meng, Fanwu; Xu, Chunguang; Li, Xipeng; Zhou, Shiyuan; Xiao, Dingguo

    2013-01-01

    On-site detection of micro-damage in flat components is significant for improving equipment safety. Based on the propagation laws of surface acoustic waves (SAWs) in flat components, micro-damage detection in flat components has been investigated. Using wavelet analysis and inverse-spectrum techniques, the feature parameters of the micro-damage can be extracted accurately. Utilizing the feature parameters obtained by every transducer in a transducer array, an image of the micro-damage can be reconstructed, and the location, outer geometric configuration, and damage level of the micro-damage can be shown clearly.

  16. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Due to the special imaging principle of interference hyperspectral image data, there are many vertical interference stripes in every frame. The stripes' positions are fixed, and their pixel values are very high. Horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so the direct application of compressive sensing theory and traditional compression algorithms cannot achieve the ideal effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them will also differ. Following this idea, in this paper morphological component analysis (MCA) is adopted to separate the interference stripe signals from the background signals. As the huge volume of interference hyperspectral imagery leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed according to the characteristics of the interference hyperspectral image data. The iterative convergence condition is improved so that the iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. And, following the idea that an orthogonal basis can sparsely represent its corresponding signals but not other signals, an adaptive threshold update scheme is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that
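
    The alternating-thresholding core of MCA can be shown in one dimension. The toy below, which is not the paper's improved algorithm, separates a smooth background (sparse in the DCT basis) from fixed high-valued spikes (sparse in the identity basis) by soft-thresholding against each basis in turn with a decreasing threshold; the signal, spike spacing, and threshold schedule are invented.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(8)
n = 512
background = np.cos(2.0 * np.pi * 3.0 * np.arange(n) / n)   # smooth part
stripes = np.zeros(n)
stripes[::64] = 5.0                       # fixed, high-valued "stripes"
x = background + stripes

s_smooth = np.zeros(n)
s_spikes = np.zeros(n)
for lam in np.linspace(2.0, 0.05, 40):    # decreasing threshold schedule
    s_smooth = idct(soft(dct(x - s_spikes, norm="ortho"), lam), norm="ortho")
    s_spikes = soft(x - s_smooth, lam)

print("max background error:", np.max(np.abs(s_smooth - background)))
```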

  17. Polycyclic Aromatic Aerosol Components: Chemical Analysis and Reactivity

    NASA Astrophysics Data System (ADS)

    Schauer, C.; Niessner, R.; Pöschl, U.

    Polycyclic aromatic hydrocarbons (PAHs) are ubiquitous environmental pollutants in the atmosphere and originate primarily from incomplete combustion of organic matter and fossil fuels. Their main sources are anthropogenic (e.g. vehicle emissions, domestic heating or tobacco smoke), and PAHs consisting of more than four fused aromatic rings reside mostly on combustion aerosol particles, where they can react with atmospheric trace gases like O3, NOx or OH radicals, leading to a wide variety of partially oxidized and nitrated derivatives. Such chemical transformations can strongly affect the activity of the aerosol particles as condensation nuclei, their atmospheric residence times, and consequently their direct and indirect climatic effects. Moreover, some polycyclic aromatic compounds (PACs = PAHs + derivatives) are known to have a high carcinogenic, mutagenic and allergenic potential, and are thus of major importance in air pollution control. Furthermore, PACs can be used as well-defined soot model substances, since the basic structure of soot can be regarded as an agglomerate of highly polymerized PAC layers. For the chemical analysis of polycyclic aromatic aerosol components a new analytical method based on LC-APCI-MS has been developed, and a database comprising PAHs, Oxy-PAHs and Nitro-PAHs has been established. Together with a GC-HRMS method it will be applied to identify and quantify PAHs and Nitro-PAHs in atmospheric aerosol samples, diesel exhaust particle samples and model soot samples from laboratory reaction kinetics and product studies. As reported before, the adsorption and surface reaction rate of ozone on soot and PAH-like particle surfaces is reduced by competitive adsorption of water vapor at low relative humidity (< 25%). Recent results at higher relative humidities (ca. 50%), however, indicate re-enhanced gas-phase ozone loss, which may be due to absorption of ozone into an aqueous surface layer. The interaction of ozone and nitrogen

  18. A component based approach to scientific workflow management

    NASA Astrophysics Data System (ADS)

    Baker, N.; Brooks, P.; Kovacs, Z.; LeGoff, J.-M.; McClatchey, R.

    2001-08-01

    CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. Interest from such diverse communities as bio-informatics and manufacturing has motivated the CRISTAL team to re-engineer the system to customize functionality according to end user requirements but maximize software reuse in the process. The next generation CRISTAL vision is to build a generic component architecture from which a complete software product line can be generated according to the particular needs of the target enterprise. This paper discusses the issues of adopting a component product line based approach and our experiences of software reuse.

  19. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique which decomposes each chromophore into multi-resolution images, which can be used for identifying different sizes of clusters or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. As a result, the method showed high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of middle spatial frequency; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.

  20. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
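
    In the simplest scalar textbook form, the ingredients listed above (a two-parameter Weibull strength distribution plus power-law subcritical crack growth) combine into a time-dependent survival probability. The sketch below shows only that scalar form, with invented material constants; CARES/LIFE itself evaluates reliability element-by-element over finite element stress fields and supports the multiaxial theories listed above.

```python
import numpy as np

m, sigma0 = 10.0, 400.0   # Weibull modulus, characteristic strength (MPa)
N, B = 20.0, 1.0e5        # SCG exponent and parameter (MPa^2 s), invented

def survival(stress_mpa, t_seconds):
    # Equivalent inert strength after sustained load (textbook power-law SCG),
    # inserted into the two-parameter Weibull survival probability.
    sigma_eq = (stress_mpa**N * t_seconds / B
                + stress_mpa**(N - 2.0))**(1.0 / (N - 2.0))
    return np.exp(-(sigma_eq / sigma0)**m)

for hours in (1, 100, 10000):
    print(f"{hours} h: P_survival = {survival(200.0, hours * 3600.0):.4f}")
```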

  1. Removing Milky Way from airglow images using principal component analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhenhua; Liu, Alan; Sivjee, Gulamabas G.

    2014-04-01

    Airglow imaging is an effective way to obtain atmospheric gravity wave information in the airglow layers in the upper mesosphere and the lower thermosphere. Airglow images are often contaminated by the Milky Way emission. To extract gravity wave parameters correctly, the Milky Way must be removed. The paper demonstrates that principal component analysis (PCA) can effectively represent the dominant variation patterns of the intensity of airglow images that are associated with the slow moving Milky Way features. Subtracting this PCA reconstructed field reveals gravity waves that are otherwise overwhelmed by the strong spurious waves associated with the Milky Way. Numerical experiments show that nonstationary gravity waves with typical wave amplitudes and persistences are not affected by the PCA removal because the variances contributed by each wave event are much smaller than the ones in the principal components.
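
    The removal step reduces to reconstructing each frame from a few leading principal components and subtracting that reconstruction. A hedged sketch follows, with a random stand-in for the airglow sequence and an assumed choice of three components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
n_frames, h, w = 200, 64, 64
images = rng.random((n_frames, h * w))    # stand-in for the airglow sequence

pca = PCA(n_components=3).fit(images)     # number of PCs kept is a choice
slow_field = pca.inverse_transform(pca.transform(images))   # slow pattern
waves = (images - slow_field).reshape(n_frames, h, w)       # residual waves
```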

  2. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    SciTech Connect

    Whalen, Michael R. (Stevens Tech.; SLAC)

    2011-06-22

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source, LCLS, timing differences between the optical pulse and x-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements from signals corresponding to the two laser pulses; these measurements are done by using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting phase detection systems based on mixers, such as signal input power, noise levels, temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also shows that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to exist on a scale between 1 and 27 fs/°C for all of the components tested. Results show that any components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved to lower than currently possible.

  3. Biochemical component identification by plasmonic improved whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-05-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins, microelements, and antibiotics of different generations, in both single- and multi-component solutions over a wide range of concentrations, analyzed via the light-scattering parameters of a whispering gallery mode optical resonance based sensor. Multiplexing over parameters and components has been realized using a developed fluidic sensor cell with dielectric microspheres fixed in an adhesive layer, together with data processing. Biochemical component identification has been performed by the developed network analysis techniques. The developed approach is demonstrated to be applicable both to single-agent and to multi-component biochemical analysis. A novel technique based on optical resonance in microring structures, plasmon resonance, and identification tools has been developed. To improve the sensitivity of the microring structures, the adhesive-fixed microspheres were pretreated with a gold nanoparticle solution. Another technique used thin gold films deposited on the substrate below the adhesive. Both biomolecule and nanoparticle injections caused considerable changes in the optical resonance spectra. Plasmonic gold layers of optimized thickness also improved the parameters of the optical resonance spectra. Biochemical component identification was also performed by the developed network analysis techniques for both single- and multi-component solutions. Thus, the advantages of plasmon-enhanced optical microcavity resonance combined with multiparameter identification tools are used to develop a new platform for an ultrasensitive label-free biomedical sensor.

  4. Principal components granulometric analysis of tidally dominated depositional environments

    SciTech Connect

    Mitchell, S.W.; Long, W.T.; Friedrich, N.E.

    1991-02-01

    Sediments often are investigated by using mechanical sieve analysis (at 1/4 or 1/2φ intervals) to identify differences in weight-percent distributions between related samples, and thereby to deduce variations in sediment sources and depositional processes. Similar granulometric data from groups of surface samples from two siliciclastic estuaries and one carbonate tidal creek have been clustered using principal components analysis. Subtle geographic trends in tidally dominated depositional processes and in sediment sources can be inferred from the clusters. In Barnstable Harbor, Cape Cod, Massachusetts, the estuary can be subdivided into five major subenvironments, with tidal current intensities/directions and sediment sources (longshore transport or sediments weathering from the Sandwich Moraine) as controls. In Morro Bay, San Luis Obispo County, California, all major environments (beach, dune, bay, delta, and fluvial) can be easily distinguished; a wide variety of subenvironments can be recognized. On Pigeon Creek, San Salvador Island, Bahamas, twelve subenvironments can be recognized. Biogenic (Halimeda, Peneroplis, mixed skeletal), chemogenic (peloids, aggregates), and detrital (lithoclasts of eroding Pleistocene limestone) grain types dominate. When combined with tidal current intensities/directions, grain sources produce subenvironments distributed parallel to tidal channels. The investigation of the three modern environments indicates that principal components granulometric analysis is potentially a useful tool for recognizing subtle changes in transport processes and sediment sources preserved in ancient depositional sequences.

  5. Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis

    PubMed Central

    Rochon, Kateryn; Duehl, Adrian J.; Anderson, John F.; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C.; Obenauer, Peter J.; Campbell, James F.; Lysyk, Tim J.; Allan, Sandra A.

    2015-01-01

    Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium “Advancements in arthropod monitoring technology, techniques, and analysis” presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones, and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae), respectively. After sampling, extrapolating real-world population numbers from trap capture data is possible with the appropriate analysis techniques. Examples of this extrapolation and action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles. PMID:26543242

  6. Analysis of Residual Dependencies of Independent Components Extracted from fMRI Data

    PubMed Central

    Vanello, N.; Ricciardi, E.; Landini, L.

    2016-01-01

    Independent component analysis (ICA) of functional magnetic resonance imaging (fMRI) data can be employed as an exploratory method. The lack in the ICA model of strong a priori assumptions about the signal or about the noise leads to difficult interpretations of the results. Moreover, the statistical independence of the components is only approximated. Residual dependencies among the components can reveal informative structure in the data. A major problem is related to model order selection, that is, the number of components to be extracted. Specifically, overestimation may lead to component splitting. In this work, a method based on hierarchical clustering of ICA applied to fMRI datasets is investigated. The clustering algorithm uses a metric based on the mutual information between the ICs. To estimate the similarity measure, a histogram-based technique and one based on kernel density estimation are tested on simulated datasets. Simulations results indicate that the method could be used to cluster components related to the same task and resulting from a splitting process occurring at different model orders. Different performances of the similarity measures were found and discussed. Preliminary results on real data are reported and show that the method can group task related and transiently task related components. PMID:26839530
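
    The clustering step can be sketched with a histogram-based mutual information estimator, one of the two estimators the abstract mentions. The distance transform 1/(1 + MI), the bin count, and the linkage settings below are assumptions for the demo, not the paper's parameters.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(10)
ics = rng.standard_normal((12, 500))              # 12 ICs x 500 time points
ics[1] = ics[0] + 0.1 * rng.standard_normal(500)  # a "split" dependent pair

def hist_mi(a, b, bins=16):
    # Histogram (contingency-table) estimate of mutual information.
    return mutual_info_score(None, None,
                             contingency=np.histogram2d(a, b, bins)[0])

n = len(ics)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 / (1.0 + hist_mi(ics[i], ics[j]))

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=0.7, criterion="distance")
print("cluster labels:", labels)   # the dependent pair shares a label
```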

  8. Vertical beam emittance correction with independent component analysis measurement method

    NASA Astrophysics Data System (ADS)

    Wang, Fei

    The storage ring performance is determined by the vertical beam size, that is, by the vertical emittance, which is determined by two factors: the vertical dispersion generated in the bending magnets, and the coupling of the oscillations in the vertical and horizontal planes. In this dissertation, a detailed study of the main source of the vertical emittance and effective correction methods are presented. Simulations show that the vertical emittance is dominated by the contribution due to photon emission with non-zero vertical dispersion in bending magnets. An effective method for vertical dispersion correction is to analyze the harmonics of the vertical dispersion and to eliminate the largest components of the stopband integral with harmonics near the vertical betatron tune. A stopband correction scheme is being implemented in which the excitation of skew-quadrupole correctors is determined from measurements of the resonance strengths (stopband widths) of major resonances. This method can correct the vertical dispersion function and the coupling strength simultaneously without identifying the source of errors. Studies show the coupling strength and the vertical dispersion can be controlled individually in the quadruple-bend achromat low-emittance lattice. The resulting improvement in machine performance is that the equilibrium vertical emittance is reduced by a factor of 7. Effective correction depends on precise beam measurements. Independent component analysis of BPM turn-by-turn data has shown the potential to be a useful tool for diagnostics and optics verification. The effectiveness of employing the independent component analysis (ICA) method to measure the vertical dispersion function is studied. This method for extracting the beta function and phase advance at the beam position monitors is presented. The accuracy of the optical functions thus calculated is affected by different factors in different ways. The most influential factors on the accuracy are

  9. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    NASA Astrophysics Data System (ADS)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.

  10. Multiplex component-based allergen microarray in recent clinical studies.

    PubMed

    Patelis, A; Borres, M P; Kober, A; Berthold, M

    2016-08-01

    During the last decades, component-resolved diagnostics, either as singleplex or multiplex measurements, has been introduced into the field of clinical allergology, providing important information that cannot be obtained from extract-based tests. Here we review recent studies that demonstrate clinical applications of the multiplex microarray technique in the diagnosis and risk assessment of allergic patients, and its usefulness in studies of allergic diseases. The usefulness of ImmunoCAP ISAC has been validated in a wide spectrum of allergic diseases like asthma, allergic rhinoconjunctivitis, atopic dermatitis, eosinophilic esophagitis, food allergy and anaphylaxis. ISAC provides a broad picture of a patient's sensitization profile from a single test, and provides information on specific and cross-reactive sensitizations that facilitates diagnosis, risk assessment, and disease management. Furthermore, it can reveal unexpected sensitizations which may explain anaphylaxis previously categorized as idiopathic, and can also display sensitizations that are, for the moment, clinically non-relevant. ISAC can facilitate a better selection of relevant allergens for immunotherapy compared with extract testing. The microarray technique can visualize the allergic march and molecular spreading in the preclinical stages of allergic diseases, and may indicate that the likelihood of developing symptomatic allergy is associated with specific profiles of sensitization to allergen components. ISAC is shown to be a useful tool in routine allergy diagnostics due to its ability to improve risk assessment, to better select relevant allergens for immunotherapy, and to detect unknown sensitizations. Multiplex component testing is especially suitable for patients with complex symptomatology. PMID:27196983

  11. Patch testing with components of water-based metalworking fluids.

    PubMed

    Geier, Johannes; Lessmann, Holger; Frosch, Peter J; Pirker, Claudia; Koch, Patrick; Aschoff, Roland; Richter, Gerhard; Becker, Detlef; Eckert, Christian; Uter, Wolfgang; Schnuch, Axel; Fuchs, Thomas

    2003-08-01

    Water-based metalworking fluids (MWFs) may cause both irritant and allergic contact dermatitis. Several well-known MWF allergens are available for patch testing, but considering the wide variety of possible components used in MWF, our diagnostic arsenal covers only a small part of potential allergens. We therefore selected 13 frequently used MWF components that might be sensitizers and had not yet been tested routinely. In 5 centres, 233 dermatitis patients with present or past occupational exposure to MWF were patch tested with this and other panels. Only 7 patients showed positive reactions to the study panel. Allergic reactions to the emulsifier diglycolamine [syn. 2-(2-aminoethoxy) ethanol] were seen in 5 patients, and 1 patient each reacted positively to 2-amino-2-ethyl-1,3-propanediol (AEPD) and methyldiethanolamine (MDEA). Clinical relevance of the reactions to diglycolamine was unequivocally proven by its presence in the MWF from the patients' workplace in 3 cases. Diglycolamine seems to be an important MWF allergen, independently from monoethanolamine and diethanolamine. A test concentration of 1% petrolatum (pet.) appears to be appropriate. The importance of AEPD and MDEA as MWF allergens still remains to be established. The lack of positive test reactions to the other MWF components tested may be due to their low-sensitizing potential or too low a patch test concentration being used. PMID:14641356

  12. Analysis and test of insulated components for rotary engine

    NASA Technical Reports Server (NTRS)

    Badgley, Patrick R.; Doup, Douglas; Kamo, Roy

    1989-01-01

    The direct-injection stratified-charge (DISC) rotary engine, while attractive for aviation applications due to its light weight, multifuel capability, and potentially low fuel consumption, has until now required a bulky and heavy liquid-cooling system. NASA-Lewis has undertaken the development of a thermodynamically superior adiabatic rotary engine that obviates the cooling system by employing state-of-the-art thermal barrier coatings to thermally insulate engine components. The thermal barrier coating material for the cast aluminum, stainless steel, and ductile cast iron components was plasma-sprayed zirconia. DISC engine tests indicate effective thermal barrier-based heat loss reduction, but call for superior coefficient-of-thermal-expansion matching of materials and better tribological properties in the coatings used.

  13. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report on the Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.

  14. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
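
    Component spatial pattern analysis of this kind reduces, per pixel, to a non-negative unmixing problem: given reference absorption spectra, find non-negative abundances that reproduce the measured multispectral absorbance. A minimal sketch follows, with mock spectra and maps; the band count, the three-chemical setup, and the noise level are invented.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(11)
n_bands, h, w = 8, 16, 16
ref = rng.random((n_bands, 3))          # reference spectra for 3 chemicals
true_maps = rng.random((3, h * w))
cube = ref @ true_maps + 0.01 * rng.standard_normal((n_bands, h * w))

maps = np.empty((3, h * w))
for p in range(h * w):                  # per-pixel non-negative unmixing
    maps[:, p], _ = nnls(ref, cube[:, p])
maps = maps.reshape(3, h, w)            # one spatial pattern per chemical
```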

  15. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.

  16. Optimization design and simulation analysis for the key components of 1m aperture photoelectric theodolite

    NASA Astrophysics Data System (ADS)

    San, Xiao-gang; Qiao, Yan-feng; Yu, Shuaibei; Wang, Tao; Tang, Jie

    2014-09-01

    Taking a 1 m aperture photoelectric theodolite as the study object, its key components, including the four-way, turntable, and base, are structurally optimized so as to improve structural rigidity while reducing structural mass. First, each component's working characteristics and relationships with the other parts are studied; based on these, reasonable finite element models of the components are established, and each component's optimal material topology is obtained by continuum topology optimization. According to the structural topology, lightweight truss structure models are constructed and the models' key parameters are optimized for size. Finally, the optimized structures are verified by finite element analysis. The analyses prove that, compared to the traditional structure, the lightweight structures of the theodolite's three key components reduce mass by up to 1095.2 kg and increase the stiffness-to-mass ratio. Meanwhile, for other indexes such as maximum stress, static deformation, and first-order natural frequency, the lightweight structures also outperform the traditional structure. After alignment, the angular shaking error of the theodolite's horizontal axis was tested by autocollimator, with the results: maximum error υ = 1.82″, mean square error σ = 0.62″. Further, the angular shaking error of the theodolite's vertical axis was tested by a 0.2″ gradienter, with the results: maximum error υ = 1.97″, mean square error σ = 0.706″. The results of all these analyses and tests fully prove that the optimized lightweight key components of this 1 m aperture theodolite are reasonable and effective in satisfying the instrument's requirements.

  17. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis has been conducted to determine the most appropriate components, using a specific indication classification to prioritize their applicability. The results of the extraction process reveal 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  18. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  19. [Discrimination of Red Tide algae by fluorescence spectra and principal component analysis].

    PubMed

    Su, Rong-guo; Hu, Xu-peng; Zhang, Chuan-song; Wang, Xiu-lin

    2007-07-01

    Fluorescence discrimination technology for 11 species of Red Tide algae at the genus level was constructed by principal component analysis and non-negative least squares. Rayleigh and Raman scattering peaks of the 3D fluorescence spectra were eliminated by the Delaunay triangulation method. According to the results of Fisher linear discrimination, the first and second principal component scores of the 3D fluorescence spectra were chosen as discriminant features and the feature base was established. The 11 algae species were tested, and more than 85% of samples were accurately determined; for Prorocentrum donghaiense, Skeletonema costatum, and Gymnodinium sp., which have frequently caused red tides in the East China Sea, more than 95% of samples were correctly discriminated. The results showed that the genus discriminant features of 3D fluorescence spectra of Red Tide algae given by principal component analysis work well. PMID:17891964
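
    The pipeline sketched below mirrors the described approach in outline only: principal-component scores of reference spectra serve as the discriminant feature base, and an unknown spectrum is assigned by non-negative least squares against that base. The synthetic "spectra", genus count, and noise level are placeholders, not the paper's data.

```python
# Illustrative sketch: PCA scores as features, NNLS against per-genus mean scores.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_wl = 200                                    # number of spectral channels (assumed)
genus_means = rng.random((3, n_wl))           # stand-in reference spectra per genus
library = np.vstack([m + 0.02 * rng.standard_normal((10, n_wl)) for m in genus_means])
labels = np.repeat(np.arange(3), 10)

# PCA via SVD of the mean-centered library; keep the first two component scores.
mean_spec = library.mean(axis=0)
_, _, vt = np.linalg.svd(library - mean_spec, full_matrices=False)
scores = (library - mean_spec) @ vt[:2].T
base = np.array([scores[labels == g].mean(axis=0) for g in range(3)]).T  # 2 x 3

unknown = genus_means[1] + 0.02 * rng.standard_normal(n_wl)
u_score = (unknown - mean_spec) @ vt[:2].T
coef, _ = nnls(base, u_score)
print("NNLS abundances:", np.round(coef, 3), "-> assigned genus", int(coef.argmax()))
```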

  20. Hurricane properties by principal component analysis of Doppler radar data

    NASA Astrophysics Data System (ADS)

    Harasti, Paul Robert

    A novel approach based on Principal Component Analysis (PCA) of Doppler radar data establishes hurricane properties such as the positions of the circulation centre and wind maxima. The method was developed in conjunction with a new Doppler radar wind model for both mature and weak hurricanes. The tangential wind (Vt) is modeled according to Vt ζ^x = constant, where ζ is the radius and x is an exponent. The maximum Vt occurs at the Radius of Maximum Wind (RMW). For the mature (weak) hurricane case, x = 1 (x < 1) within the RMW, and x = 0.5 (x = 0) beyond the RMW. The radial wind is modeled in a similar fashion in the radial direction, with up to two transition radii, but it also varies linearly in the vertical direction. This is the first Doppler radar wind model to account for vertical variations in the radial wind. The new method employs an S2-mode PCA on the Doppler velocity data taken from a single PPI scan and arranged sequentially in a matrix according to their azimuth and range coordinates. The first two eigenvectors of both the range and azimuth eigenspaces represent over 95% of the total variance in the modeled data; one eigenvector from each pair is analyzed separately to estimate particular hurricane properties. These include the bearing and range to the hurricane's circulation centre, the RMW, and the transition radii of the radial wind. Model results suggest that greater accuracy is achievable and fewer restrictions apply in comparison to other methods. The PCA method was tested on the Doppler velocity data of Hurricane Erin (1995) and Typhoon Alex (1987). In both cases, the similarity of the eigenvectors to their theoretical counterparts was striking, even in the presence of significant missing data. Results from Hurricane Erin were in agreement with concurrent aircraft observations of the wind centre corrected for the storm motion. Such information was not available for Typhoon Alex; however, the results agreed with those from other methods.
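
    A toy version of the decomposition step, not Harasti's algorithm: build an azimuth-by-range Doppler velocity matrix from an assumed tangential profile peaking at a nominal RMW, then read the RMW off the leading range eigenvector.

```python
# Sketch: PCA/SVD of a single-PPI Doppler velocity matrix (azimuth x range).
import numpy as np

az = np.deg2rad(np.arange(0.0, 360.0, 1.0))
r_km = np.linspace(1.0, 100.0, 200)
rmw = 30.0
# Assumed tangential profile: V ~ r inside the RMW, V ~ r^-0.5 beyond it.
v_t = np.where(r_km <= rmw, r_km / rmw, (rmw / r_km) ** 0.5)
doppler = np.outer(np.cos(az), v_t)        # radial projection of the tangential wind

u, s, vh = np.linalg.svd(doppler, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance captured by first two modes:", round(explained[:2].sum(), 4))
print("range-eigenvector peak at", r_km[np.abs(vh[0]).argmax()], "km (near the RMW)")
```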

  1. V-Lab{trademark}: Virtual laboratories -- The analysis tool for structural analysis of composite components

    SciTech Connect

    1999-07-01

    V-Lab{trademark}, an acronym for Virtual Laboratories, is a design and analysis tool for fiber-reinforced composite components. This program allows the user to perform analysis, numerical experimentation, and design prototyping using advanced composite stress and failure analysis tools. The software was designed to be intuitive and easy to use, even by designers who are not experts in composite materials or structural analysis. V-Lab{trademark} is the software tool every specialist in design engineering, structural analysis, research and development and repair needs to perform accurate, fast and economical analysis of composite components.

  2. Analysis of Fission Products on the AGR-1 Capsule Components

    SciTech Connect

    Paul A. Demkowicz; Jason M. Harp; Philip L. Winston; Scott A. Ploger

    2013-03-01

    The components of the AGR-1 irradiation capsules were analyzed to determine the retained inventory of fission products in order to determine the extent of in-pile fission product release from the fuel compacts. This includes analysis of (i) the metal capsule components, (ii) the graphite fuel holders, (iii) the graphite spacers, and (iv) the gas exit lines. The fission products most prevalent in the components were Ag-110m, Cs-134, Cs-137, Eu-154, and Sr-90, and the most common locations were the metal capsule components and the graphite fuel holders. Gamma scanning of the graphite fuel holders was also performed to determine the spatial distribution of Ag-110m and radiocesium. Silver was released from the fuel components in significant fractions. The total Ag-110m inventory found in the capsules ranged from 1.2×10^-2 (Capsule 3) to 3.8×10^-1 (Capsule 6). Ag-110m was not distributed evenly in the graphite fuel holders, but tended to concentrate at the axial ends of the graphite holders in Capsules 1 and 6 (located at the top and bottom of the test train) and near the axial center in Capsules 2, 3, and 5 (in the center of the test train). The Ag-110m further tended to be concentrated around fuel stacks 1 and 3, the two stacks facing the ATR reactor core and the location of higher burnup, neutron fluence, and temperatures compared with Stack 2. Detailed correlation of silver release with fuel type and irradiation temperatures is problematic at the capsule level due to the large range of temperatures experienced by individual fuel compacts in each capsule. A comprehensive Ag-110m mass balance for the capsules was performed using measured inventories of individual compacts and the inventory on the capsule components. For most capsules, the mass balance was within 11% of the predicted inventory. The Ag-110m release from individual compacts often exhibited a very large range within a particular capsule.

  3. The 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Mcknight, R. L.

    1983-01-01

    The objective of this research is to develop an analytical tool capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. The techniques developed must be capable of accommodating large excursions in temperatures with the associated variations in material properties including plasticity and creep. The overall objective of this proposed program is to develop advanced 3-D inelastic structural/stress analysis methods and solution strategies for more accurate and yet more cost effective analysis of combustors, turbine blades, and vanes. The approach will be to develop four different theories, one linear and three higher order with increasing complexities including embedded singularities.

  4. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features, or dimensionality reduction, is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance, with a classification rate of 96% versus 94% for the Genetic Algorithm. PMID:26081920
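
    The PCA branch of such a comparison can be sketched with scikit-learn on synthetic features standing in for the wavelet coefficients; the dataset, retained component count, and network size below are illustrative assumptions, not the study's settings.

```python
# Sketch: PCA dimensionality reduction feeding a small neural-network classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=64, n_informative=12,
                           n_classes=3, random_state=0)   # surrogate defect classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=12),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print(f"accuracy with PCA-selected features: {clf.score(X_te, y_te):.2%}")
```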

  5. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; Moseley, S. H.; Porst, J.-P.; Porter, F. S.; Sadleir, J. E.; Smith, S. J.

    2016-07-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used a 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.
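
    A stripped-down sketch of the PCA step on simulated pulse records; the pulse template, jitter, and noise are assumptions, not TES data. Pulses are stacked as rows, the mean pulse is removed, and each record is projected onto the leading components.

```python
# Sketch: principal components of X-ray pulse records via SVD.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(512)
template = np.exp(-t / 80.0) - np.exp(-t / 8.0)        # assumed pulse shape

heights = 1.0 + 0.05 * rng.standard_normal(300)        # pulse-height variation
shifts = rng.integers(0, 4, size=300)                  # arrival-time jitter (samples)
pulses = np.array([h * np.roll(template, k) for h, k in zip(heights, shifts)])
pulses += 0.01 * rng.standard_normal(pulses.shape)     # noise

centered = pulses - pulses.mean(axis=0)
_, sv, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:3].T     # per-pulse amplitudes of the leading components
print("variance captured by 3 components:", round((sv[:3]**2).sum() / (sv**2).sum(), 4))
```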

  6. Bootstrap confidence intervals in multi-level simultaneous component analysis.

    PubMed

    Timmerman, Marieke E; Kiers, Henk A L; Smilde, Age K; Ceulemans, Eva; Stouten, Jeroen

    2009-05-01

    Multi-level simultaneous component analysis (MLSCA) was designed for the exploratory analysis of hierarchically ordered data. MLSCA specifies a component model for each level in the data, where appropriate constraints express possible similarities between groups of objects at a certain level, yielding four MLSCA variants. The present paper discusses different bootstrap strategies for estimating confidence intervals (CIs) on the individual parameters. In selecting a proper strategy, the main issues to address are the resampling scheme and the non-uniqueness of the parameters. The resampling scheme depends on which level(s) in the hierarchy are considered random, and which fixed. The degree of non-uniqueness depends on the MLSCA variant, and, in two variants, the extent to which the user exploits the transformational freedom. A comparative simulation study examines the quality of bootstrap CIs of different MLSCA parameters. Generally, the quality of bootstrap CIs appears to be good, provided the sample sizes are sufficient at each level that is considered to be random. The latter implies that if more than a single level is considered random, the total number of observations necessary to obtain reliable inferential information increases dramatically. An empirical example illustrates the use of bootstrap CIs in MLSCA. PMID:18086338
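
    A much-simplified sketch of the bootstrap idea, with plain PCA standing in for MLSCA (which common libraries do not implement): resample cases at the level treated as random, refit, align signs to tame one source of non-uniqueness, and take percentile intervals on the loadings.

```python
# Sketch: percentile bootstrap CIs for first-component loadings.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 6
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # correlated toy data

def first_loading(data):
    c = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return vt[0]

ref = first_loading(X)
boot = []
for _ in range(1000):
    v = first_loading(X[rng.integers(0, n, size=n)])   # resample rows (cases)
    boot.append(v if v @ ref >= 0 else -v)             # sign alignment
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CI, component-1 loadings:")
print(np.round(lo, 2))
print(np.round(hi, 2))
```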

  7. Application of curvilinear component analysis to chaos game representation images of genome

    NASA Astrophysics Data System (ADS)

    Vilain, Joseph; Giron, Alain; Brahmi, Djamel; Deschavanne, Patrick; Fertil, Bernard

    1999-03-01

    Curvilinear component analysis (CCA) is performed by an original self-organized neural network, which provides a convenient approach for dimension reduction and data exploration. It consists of a non-linear, distance-preserving projection of a set of quantizing vectors describing the input space. The CCA technique is applied to the analysis of CGR fractal images of DNA sequences from different species. The CGR method produces images whose pixels represent the frequencies of small sequences of bases, revealing nested patterns in DNA sequences.
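
    For context, the CGR images that the CCA operates on can be generated in a few lines: each base maps to a corner of the unit square, each new point is the midpoint between the previous point and the current base's corner, and pixel counts then encode k-mer frequencies. The random sequence and resolution below are illustrative, not the study's genomes.

```python
# Sketch: chaos game representation (CGR) frequency image of a DNA sequence.
import numpy as np

rng = np.random.default_rng(4)
corners = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}
seq = rng.choice(list("ACGT"), size=100_000)   # stand-in sequence

k = 6                                          # 2^k x 2^k image (k-mer resolution)
size = 2 ** k
image = np.zeros((size, size))
x = y = 0.5
for base in seq:
    cx, cy = corners[base]
    x, y = (x + cx) / 2.0, (y + cy) / 2.0      # midpoint toward the base's corner
    image[int(y * size), int(x * size)] += 1
print("CGR image:", image.shape, "| busiest pixel count:", int(image.max()))
```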

  8. Component-based handprint segmentation using adaptive writing style model

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.

    1997-04-01

    Building upon the utility of connected components, NIST has designed a new character segmentor based on statistically modeling the style of a person's handwriting. Simple spatial features capture the characteristics of a particular writer's style of handprint, enabling the new method to maintain a traditional character-level segmentation philosophy without the integration of recognition or the use of oversegmentation and linguistic postprocessing. Estimates for stroke width and character height are used to compute aspect ratio and standard stroke count features that adapt to the writer's style at the field level. The new method has been developed with a predetermined set of fuzzy rules making the segmentor much less fragile and much more adaptive, and the new method successfully reconstructs fragmented characters as well as splits touching characters. The new segmentor was integrated into the NIST public domain form-based handprint recognition systems and then tested on a set of 490 handwriting sample forms found in NIST special database 19. When compared to a simple component-based segmentor, the new adaptable method improved the overall recognition of handprinted digits by 3.4 percent and field level recognition by 6.9 percent, while effectively reducing deletion errors by 82 percent. The same program code and set of parameters successfully segments sequences of uppercase and lowercase characters without any context-based tuning. While not as dramatic as digits, the recognition of uppercase and lowercase characters improved by 1.7 percent and 1.3 percent respectively. The segmentor maintains a relatively straight-forward and logical process flow avoiding convolutions of encoded exceptions as is common in expert systems. As a result, the new segmentor operates very efficiently, and throughput as high as 362 characters per second can be achieved. Letters and numbers are constructed from a predetermined configuration of a relatively small number of strokes. Results
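
    A hedged fragment showing the kind of connected-component bookkeeping such a segmentor starts from (synthetic blobs rather than NIST form images): label the components of a binarized field and compute the aspect-ratio feature that adapts to a writer's style.

```python
# Sketch: connected components and aspect-ratio features with scipy.ndimage.
import numpy as np
from scipy import ndimage

img = np.zeros((40, 120), dtype=bool)   # fake binarized field with three blobs
img[10:30, 10:20] = True
img[8:32, 40:52] = True
img[12:28, 80:95] = True

labels, n = ndimage.label(img)
for i, sl in enumerate(ndimage.find_objects(labels), start=1):
    h = sl[0].stop - sl[0].start
    w = sl[1].stop - sl[1].start
    print(f"component {i}: {h}x{w} px, aspect ratio h/w = {h / w:.2f}")
```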

  9. Efficacy-oriented compatibility for component-based Chinese medicine

    PubMed Central

    Zhang, Jun-hua; Zhu, Yan; Fan, Xiao-hui; Zhang, Bo-li

    2015-01-01

    Single-target drugs have not achieved satisfactory therapeutic effects for complex diseases involving multiple factors. Instead, innovations in recent drug research and development have revealed the emergence of compound drugs, such as cocktail therapies and “polypills”, as the frontier in new drug development. A traditional Chinese medicine (TCM) prescription, which is usually composed of several medicinal herbs, can serve as a typical representative of compound medicines. Although the traditional compatibility theory of TCM cannot yet be well expressed in modern scientific language, the fundamental purpose of TCM compatibility can be understood as promoting efficacy and reducing toxicity. This paper introduces the theory and methods of efficacy-oriented compatibility for developing component-based Chinese medicines. PMID:25864650

  10. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    NASA Astrophysics Data System (ADS)

    Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie

    2013-08-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra at distances of up to seven meters to quantitatively analyze the sampled rocks. Like any field application, on-site LIBS measurements are affected by diverse matrix effects that induce signal variations specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of chemometric methods that model the spectral variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and Mahalanobis distances, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives better overall performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary, and their analytical performance will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches.
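
    The classification idea can be sketched with scikit-learn's FastICA plus a per-class Mahalanobis score; the two "rock" classes below are synthetic surrogates and the component count is an assumption, not the ChemCam pipeline.

```python
# Sketch: project spectra into ICA space, classify by Mahalanobis distance.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
classes = {"basalt": rng.random(300), "sandstone": rng.random(300)}
spectra, labels = [], []
for name, mean in classes.items():
    for _ in range(40):
        spectra.append(mean + 0.05 * rng.standard_normal(300))
        labels.append(name)
X = np.array(spectra)

ica = FastICA(n_components=5, random_state=0)
S = ica.fit_transform(X)

unknown = ica.transform((classes["basalt"] + 0.05 * rng.standard_normal(300))[None, :])[0]
for name in classes:
    pts = S[np.array(labels) == name]
    vi = np.linalg.inv(np.cov(pts.T) + 1e-9 * np.eye(5))   # regularized inverse cov
    d = mahalanobis(unknown, pts.mean(axis=0), vi)
    print(f"{name:10s} Mahalanobis distance = {d:.2f}")    # smallest distance wins
```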

  11. Autonomous learning in gesture recognition by using lobe component analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). To ensure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, (1) feature selection (or model establishment) and (2) training from samples largely determine the performance of gesture recognition. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For (2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule with lateral inhibition. Due to the LCA method's advantage of balanced learning between global and local features, large numbers of samples can be used in learning efficiently.

  12. Revisiting AVHRR tropospheric aerosol trends using principal component analysis

    NASA Astrophysics Data System (ADS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-03-01

    The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean. It offers valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last years after an eruption, which potentially masks the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols in a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and re-estimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase in aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, the North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.
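
    A minimal sketch of the removal-and-reconstruction step, assuming mode 1 of the decomposition captures the volcanic signal; the anomaly field here is random surrogate data, not AVHRR retrievals.

```python
# Sketch: remove the leading PCA mode and keep the remainder for trend analysis.
import numpy as np

rng = np.random.default_rng(6)
months, cells = 300, 500
anomalies = rng.standard_normal((months, cells))   # surrogate AOD anomalies

centered = anomalies - anomalies.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
mode1 = np.outer(u[:, 0] * s[0], vt[0])   # assumed "volcanic" component
cleaned = centered - mode1                # reconstructed data without that mode
print("variance removed with mode 1:", round(s[0]**2 / (s**2).sum(), 4))
```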

  13. Demixed principal component analysis of neural population data

    PubMed Central

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-01-01

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378

  14. Derivation of Boundary Manikins: A Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.

  15. Revisiting AVHRR Tropospheric Aerosol Trends Using Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-01-01

    The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean. It offers valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last years after an eruption, which potentially masks the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols in a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and re-estimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase in aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, the North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.

  16. Prognostic Health Monitoring System: Component Selection Based on Risk Criteria and Economic Benefit Assessment

    SciTech Connect

    Binh T. Pham; Vivek Agarwal; Nancy J Lybeck; Magdy S Tawfik

    2012-05-01

    Prognostic health monitoring (PHM) is a proactive approach to monitor the ability of structures, systems, and components (SSCs) to withstand structural, thermal, and chemical loadings over the SSCs' planned service lifespans. The current efforts to extend the operational license lifetime of the aging fleet of U.S. nuclear power plants from 40 to 60 years and beyond can benefit from a systematic application of PHM technology. Implementing a PHM system would strengthen the safety of nuclear power plants, reduce plant outage time, and reduce operation and maintenance costs. However, a nuclear power plant has thousands of SSCs, so implementing a PHM system that covers all SSCs requires careful planning and prioritization. This paper therefore focuses on a component selection that is based on the analysis of a component's failure probability, risk, and cost. Ultimately, the decision on component selection depends on the overall economic benefits arising from the safety and operational considerations associated with implementing the PHM system.
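
    A hypothetical screening sketch consistent with the stated criteria (failure probability, consequence, cost); every component name and number below is invented for illustration only.

```python
# Sketch: rank components for PHM coverage by expected annual risk vs. cost.
components = [
    # (name, annual failure probability, consequence cost $, PHM install cost $)
    ("feedwater pump",      0.020,  5_000_000, 250_000),
    ("service water valve", 0.050,    400_000,  60_000),
    ("main transformer",    0.005, 20_000_000, 500_000),
]

scored = []
for name, p_fail, consequence, install in components:
    risk = p_fail * consequence          # expected annual loss
    scored.append((risk / install, risk, name))

for ratio, risk, name in sorted(scored, reverse=True):
    print(f"{name:20s} risk=${risk:>9,.0f}/yr  benefit/cost={ratio:.2f}")
```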

  17. A Local Learning Rule for Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Isomura, Takuya; Toyoizumi, Taro

    2016-06-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering.

  18. A Local Learning Rule for Independent Component Analysis

    PubMed Central

    Isomura, Takuya; Toyoizumi, Taro

    2016-01-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661

  19. A Local Learning Rule for Independent Component Analysis.

    PubMed

    Isomura, Takuya; Toyoizumi, Taro

    2016-01-01

    Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661

  20. Functional principal components analysis of workload capacity functions.

    PubMed

    Burns, Devin M; Houpt, Joseph W; Townsend, James T; Endres, Michael J

    2013-12-01

    Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals. PMID:23475829

  1. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on the eigenvalues and the eigenvectors analysis, the principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters, which are enough to characterize the whole set of parameters. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions where the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be successfully used to simplify the statistical analysis of questionnaires, because it helps us select from a set of items only the most relevant ones, which cover the variation of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, applying the principal component analysis, we identified 7 principal components, or main items, a fact that simplifies significantly the further statistical analysis of the data. PMID:21495371

  2. Analysis of Performance of Jet Engine from Characteristics of Components II : Interaction of Components as Determined from Engine Operation

    NASA Technical Reports Server (NTRS)

    Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl

    1949-01-01

    In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.

  3. Blind separation of human- and horse-footstep signatures using independent component analysis

    NASA Astrophysics Data System (ADS)

    Mehmood, Asif; Damarla, Thyagaraju

    2012-06-01

    Seismic footstep detection systems for homeland security applications are important to perimeter protection and other security systems. This paper reports seismic footstep signal separation for a walking horse and a walking human. The well-known Independent Component Analysis (ICA) approach is employed to accomplish this task. ICA techniques have become widely used in audio analysis and source separation. The concept of ICA may actually be seen as an extension of principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. ICA techniques can also be used in conjunction with a classification method to achieve a high percentage of correct classification and reduce false alarms. In this paper, an ICA-based algorithm is developed and implemented on seismic data of human and horse footsteps. The performance of this method is very promising, as demonstrated by the experimental results.

  4. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
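
    For reference, sample entropy can be estimated with a straightforward O(n^2) formulation like the sketch below (not the authors' implementation); low values flag predictable, cardiac-like components, high values flag noise-like ones.

```python
# Sketch: sample entropy (SampEn) with embedding dimension m and tolerance r.
import numpy as np

def sampen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])   # templates
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)     # Chebyshev
        return (np.count_nonzero(d <= r) - len(t)) / 2.0          # no self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(7)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
print("SampEn, sine :", round(sampen(regular), 3))                   # low: predictable
print("SampEn, noise:", round(sampen(rng.standard_normal(500)), 3))  # high: irregular
```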

  5. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  6. Analysis of Femoral Components of Cemented Total Hip Arthroplasty

    NASA Astrophysics Data System (ADS)

    Singh, Shantanu; Harsha, A. P.

    2015-10-01

    There have been continuous revisions in the design of prostheses in Total Hip Arthroplasty (THA) to improve the endurance of hip replacements. In the present work, Finite Element Analysis was performed on cemented THA with CoCrMo trapezoidal, CoCrMo circular, Ti6Al4V trapezoidal, and Ti6Al4V circular stems. It was observed that the cross section and material of the femoral stem are critical parameters for the stress distribution in the femoral components, the distribution of interfacial stress, and micro-movements. The first part of the analysis investigated the designs with respect to micro-movements and developed stress for different stem materials. The later part of the analysis focused on different stem cross sections. A femoral stem made of titanium alloy (Ti6Al4V) resulted in larger debonding of the stem at the cement-stem interface and increased stress within the cement mantle in contrast to the chromium alloy (CoCrMo) stem. Thus, CoCrMo proved to be a better choice for cemented THA. Comparison between CoCrMo femoral stems of trapezium and circular cross sections showed that the trapezoidal stem experiences less sliding and debonding at the interfaces than the circular cross-section stem. The trapezium cross section also generated lower peak stress in the femoral stem and cortical femur. In the present study, a femur head with a diameter of 36 mm was considered for the analysis in order to avoid dislocation of the stem. The metallic femur head was coupled with a cross-linked polyethylene liner, as it experiences negligible wear compared to conventional polyethylene liners and, unlike metallic liners, is non-carcinogenic.

  7. Dimension reduction of non-equilibrium plasma kinetic models using principal component analysis

    NASA Astrophysics Data System (ADS)

    Peerenboom, Kim; Parente, Alessandro; Kozák, Tomáš; Bogaerts, Annemie; Degrez, Gérard

    2015-04-01

    The chemical complexity of non-equilibrium plasmas poses a challenge for plasma modeling because of the computational load. This paper presents a dimension reduction method for such chemically complex plasmas based on principal component analysis (PCA). PCA is used to identify a low-dimensional manifold in chemical state space that is described by a small number of parameters: the principal components. Reduction is obtained since continuity equations only need to be solved for these principal components and not for all the species. Application of the presented method to a CO2 plasma model including state-to-state vibrational kinetics of CO2 and CO demonstrates the potential of the PCA method for dimension reduction. A manifold described by only two principal components is able to predict the CO2 to CO conversion at varying ionization degrees very accurately.

  8. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    PubMed Central

    Hesse, Morten

    2005-01-01

    Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used in assessment, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These variables were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers. PMID:15910688

  9. Residual Strength Analysis Methodology: Laboratory Coupons to Structural Components

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Newman, J. C., Jr.; Starnes, J. H., Jr.; Rose, C. A.; Young, R. D.; Seshadri, B. R.

    2000-01-01

    The NASA Aircraft Structural Integrity Program (NASIP) and Airframe Airworthiness Assurance/Aging Aircraft (AAA/AA) Programs have developed a residual strength prediction methodology for aircraft fuselage structures. This methodology has been experimentally verified for structures ranging from laboratory coupons up to full-scale structural components. The methodology uses the critical crack tip opening angle (CTOA) fracture criterion to characterize the fracture behavior and a geometrically and materially nonlinear finite element shell analysis code to perform the structural analyses. The present paper presents the results of a study to evaluate the fracture behavior of 2024-T3 aluminum alloys with thicknesses of 0.04 to 0.09 inches. The critical CTOA and the corresponding plane-strain core height necessary to simulate through-the-thickness effects at the crack tip in an otherwise plane-stress analysis were determined from small laboratory specimens. Using these parameters, the CTOA fracture criterion was used to predict the behavior of middle-crack tension specimens that were up to 40 inches wide, flat panels with riveted stiffeners and multiple-site damage cracks, 18-inch-diameter pressurized cylinders, and full-scale curved stiffened panels subjected to internal pressure and mechanical loads.

  10. Analysis of adaptive laser scanning optical system with focus-tunable components

    NASA Astrophysics Data System (ADS)

    Pokorný, P.; Mikš, A.; Novák, J.; Novák, P.

    2015-05-01

    This work presents a primary analysis of an adaptive laser scanner based on a two-mirror beam-steering device and focus-tunable components (lenses with tunable focal length). An optical scheme of an adaptive laser scanner is proposed that can focus the laser beam in a continuous way onto a required spatial position using a lens with tunable focal length. The work focuses on a detailed analysis of the active optical or opto-mechanical components (e.g., focus-tunable lenses) mounted in the optical systems of laser scanners. Algebraic formulas are derived for ray tracing through different configurations of the scanning optical system, from which one can calculate the angles of the scanner mirrors and the required focal length of the tunable-focus component, provided that the position of the focused beam in 3D space is given with a required tolerance. Computer simulations of the proposed system are performed using MATLAB.
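
    As a back-of-the-envelope companion to the derived formulas (which are not reproduced here), a thin-lens model already shows how the required focal length of the tunable component follows from the target distance; the distances below are arbitrary.

```python
# Sketch: thin-lens focal length needed to image an object plane at s_o onto
# a focus position at s_i (Gaussian lens equation 1/f = 1/s_o + 1/s_i).
def required_focal_length(s_o_mm: float, s_i_mm: float) -> float:
    return 1.0 / (1.0 / s_o_mm + 1.0 / s_i_mm)

# Example: source 200 mm in front of the lens, focus swept from 0.5 m to 2 m.
for target in (500.0, 1000.0, 2000.0):
    f = required_focal_length(200.0, target)
    print(f"focus at {target:6.0f} mm -> f = {f:.1f} mm")
```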

  11. Principal component analysis of indocyanine green fluorescence dynamics for diagnosis of vascular diseases

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee

    2015-03-01

    Indocyanine green (ICG), a near-infrared fluorophore, has been used in visualization of vascular structure and non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine the ICG spatiotemporal profile and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet show significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics reveals better classification performance compared to fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.

  12. GNSS Vertical Coordinate Time Series Analysis Using Single-Channel Independent Component Analysis Method

    NASA Astrophysics Data System (ADS)

    Peng, Wei; Dai, Wujiao; Santerre, Rock; Cai, Changsheng; Kuang, Cuilin

    2016-05-01

    Daily vertical coordinate time series of Global Navigation Satellite System (GNSS) stations usually contains tectonic and non-tectonic deformation signals, residual atmospheric delay signals, measurement noise, etc. In geophysical studies, it is very important to separate various geophysical signals from the GNSS time series to truthfully reflect the effect of mass loadings on crustal deformation. Based on the independence of mass loadings, we combine the Ensemble Empirical Mode Decomposition (EEMD) with the Phase Space Reconstruction-based Independent Component Analysis (PSR-ICA) method to analyze the vertical time series of GNSS reference stations. In the simulation experiment, the seasonal non-tectonic signal is simulated by the sum of the correction of atmospheric mass loading and soil moisture mass loading. The simulated seasonal non-tectonic signal can be separated into two independent signals using the PSR-ICA method, which strongly correlated with atmospheric mass loading and soil moisture mass loading, respectively. Likewise, in the analysis of the vertical time series of GNSS reference stations of Crustal Movement Observation Network of China (CMONOC), similar results have been obtained using the combined EEMD and PSR-ICA method. All these results indicate that the EEMD and PSR-ICA method can effectively separate the independent atmospheric and soil moisture mass loading signals and illustrate the significant cause of the seasonal variation of GNSS vertical time series in the mainland of China.

  13. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The contributions of three major gasoline blending components (reformate, alkylate and cracked gasoline) to potential environmental impacts are assessed. This study estimates losses of the gasoline blending components due to evaporation and leaks through their life cycle, from pe...

  14. Electric field gradients in Hg compounds: molecular orbital (MO) analysis and comparison of 4-component and 2-component (ZORA) methods.

    PubMed

    Arcisauskaite, Vaida; Knecht, Stefan; Sauer, Stephan P A; Hemmingsen, Lars

    2012-12-14

    We examine the performance of Density Functional Theory (DFT) approaches based on the Zeroth-Order Regular Approximation (ZORA) Hamiltonian (with and without inclusion of spin-orbit coupling) for predictions of electric field gradients (EFGs) at the heavy-atom Hg nucleus. This is achieved by comparing with benchmark DFT and CCSD-T data (Arcisauskaite et al., Phys. Chem. Chem. Phys., 2012, 14, 2651-2657) obtained from 4-component Dirac-Coulomb Hamiltonian calculations. The investigated set of molecules comprises linear HgL2 (L = Cl, Br, I, CH3) and bent HgCl2 mercury compounds as well as the trigonal planar [HgCl3]- system. In 4-component calculations we used the dyall.cv3z basis set for Hg, Br, I and the cc-pCVTZ basis set for H, C, Cl, whereas in ZORA calculations we used the QZ4P basis set for all the atoms. ZORA-4 reproduces the fully relativistic 4-component DFT reference values within 6% for all studied Hg compounds and employed functionals (BH&H, BP86, PBE0), whereas scalar relativistic (SR)-ZORA-4 results show deviations of up to 15%. Compared to our 4-component CCSD-T benchmark, the BH&H functional performs best at both 4-component and ZORA levels. We furthermore observe that changes in the largest component of the diagonalised EFG tensor, Vzz, of linear HgCl2 show a slightly stronger dependence than the r^-3 scaling upon bond length r(Hg-Cl) alterations. The 4-component/BH&H Vzz value of -9.26 a.u. for a bent HgCl2 (∠Cl-Hg-Cl = 120°) is close to -9.60 a.u. obtained for the linear HgCl2 structure. Thus a point charge model for EFG calculations completely fails in this case. By means of a projection analysis of molecular orbital (MO) contributions to Vzz in terms of the atomic constituents, we conclude that this is due to the increased importance of the Hg 5d orbitals upon bending HgCl2 compared to the linear HgCl2 structure. Changing ligand leads to only minor changes in Vzz (from -9.60 a.u. (HgCl2) to -8.85 a.u. (HgI2) at

  15. Biochemical component identification by light scattering techniques in whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-03-01

    Experimental data on the detection and identification of a variety of biochemical agents, such as proteins (albumin, interferon, C-reactive protein), microelements (Na+, Ca+), and antibiotics of different generations, in both single- and multi-component solutions at concentrations varied over a wide range, are presented. The analysis has been performed on the light scattering parameters of a whispering gallery mode (WGM) optical resonance based sensor with dielectric microspheres of glass and PMMA as sensitive elements, fixed by spin-coating techniques in an adhesive layer on the surface of a substrate or directly on the coupling element. The sensitive layer was integrated into a developed fluidic cell with a digital syringe. Light from a tunable laser, tightly focused on and scattered by a single microsphere, was detected by a CMOS camera. The image was filtered for noise reduction and integrated over two coordinates to evaluate the integrated energy of the measured signal. The following signal parameters were used as input data: the relative (to a free spectral range) spectral shift of the WGM optical resonance frequency in the microsphere, and the relative efficiency of WGM excitation obtained within a free spectral range, both of which depend on the type and concentration of the investigated agents. Multiplexing over parameters and components has been realized using spatial and spectral parameters of the light scattered by the microsphere with the developed data processing. Biochemical component classification and identification of the agents under investigation have been performed by network analysis techniques based on a probabilistic network and a multilayer perceptron. The developed approach is demonstrated to be applicable both for single-agent and for multi-component biochemical analysis.

  16. Quince (Cydonia oblonga miller) fruit characterization using principal component analysis.

    PubMed

    Silva, Branca M; Andrade, Paula B; Martins, Rui C; Valentão, Patrícia; Ferreres, Federico; Seabra, Rosa M; Ferreira, Margarida A

    2005-01-12

    This paper presents a large amount of data on the composition of quince fruit with regard to phenolic compounds, organic acids, and free amino acids. Subsequently, principal component analysis (PCA) is carried out to characterize this fruit. The main purposes of this study were (i) the clarification of the interactions among three factors (quince fruit part, geographical origin of the fruits, and harvesting year) and the phenolic, organic acid, and free amino acid profiles; (ii) the classification of the possible differences; and (iii) the possible correlation among the contents of phenolics, organic acids, and free amino acids in quince fruit. With these aims, quince pulp and peel from nine geographical origins in Portugal, harvested in three consecutive years, for a total of 48 samples, were studied. PCA was performed to assess the relationship among the different components of quince fruit: phenolics, organic acids, and free amino acids. The phenolics determination was the most interesting. The difference between pulp and peel phenolic profiles was more apparent in the PCA. Two PCs accounted for 81.29% of the total variability, PC1 (74.14%) and PC2 (7.15%). PC1 described the difference between the contents of caffeoylquinic acids (3-O-, 4-O-, and 5-O-caffeoylquinic acids and 3,5-O-dicaffeoylquinic acid) and flavonoids (quercetin 3-galactoside, rutin, kaempferol glycoside, kaempferol 3-glucoside, kaempferol 3-rutinoside, quercetin glycosides acylated with p-coumaric acid, and kaempferol glycosides acylated with p-coumaric acid). PC2 related the content of 4-O-caffeoylquinic acid with the contents of 5-O-caffeoylquinic and 3,5-O-dicaffeoylquinic acids. PCA of phenolic compounds enables a clear distinction between the two parts of the fruit. The data presented herein may serve as a database for the detection of adulteration in quince derivatives. PMID:15631517

  17. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of the small components. To reduce the uncertainty of the plane measurement, an evaluation index for the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
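
    The linearized propagation step can be written in a few lines: with error sources x ~ N(0, Sigma) and a linearized mapping y = Jx, the output covariance is J Sigma J^T. The Jacobian and variances below are placeholders, not values from the paper.

```python
# Sketch: linear error propagation for an assembled feature position.
import numpy as np

J = np.array([[1.0, 0.2, 0.0],     # assumed sensitivities of each output
              [0.0, 1.0, 0.3]])    # coordinate to each error source
sigma = np.diag([0.05, 0.02, 0.04]) ** 2   # error-source variances (mm^2)

cov_out = J @ sigma @ J.T
std_out = np.sqrt(np.diag(cov_out))
print("output standard deviations (mm):", np.round(std_out, 4))
```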

  18. The Use of Exploratory Factor Analysis and Principal Components Analysis in Communication Research.

    ERIC Educational Resources Information Center

    Park, Hee Sun; Dailey, Rene; Lemus, Daisy

    2002-01-01

    Discusses the distinct purposes of principal components analysis (PCA) and exploratory factor analysis (EFA), using two data sets as examples. Reviews the use of each technique in three major communication journals: "Communication Monographs," "Human Communication Research," and "Communication Research." Finds that the use of EFA and PCA indicates…

  19. Gas chromatography/mass spectrometry based component profiling and quality prediction for Japanese sake.

    PubMed

    Mimura, Natsuki; Isogai, Atsuko; Iwashita, Kazuhiro; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-10-01

    Sake is a traditional Japanese alcoholic beverage produced by simultaneous saccharification and alcohol fermentation of polished and steamed rice by Aspergillus oryzae and Saccharomyces cerevisiae. About 300 compounds have been identified in sake, and the contribution of individual components to sake flavor has been examined as well. However, only a few compounds can explain the characteristics on their own, and most of the attributes remain unclear. The purpose of this study was to examine the relationship between the component profile and the attributes of sake. Gas chromatography coupled with mass spectrometry (GC/MS)-based non-targeted analysis was employed to obtain the low molecular weight component profile of Japanese sake, including both nonvolatile and volatile compounds. Sake attributes and overall quality were assessed by an analytical descriptive sensory test, and a prediction model of the sensory scores from the component profile was constructed by means of orthogonal projections to latent structures (OPLS) regression analysis. Our results showed that 12 sake attributes [ginjo-ka (aroma of premium ginjo sake), grassy/aldehydic odor, sweet aroma/caramel/burnt odor, sulfury odor, sour taste, umami, bitter taste, body, amakara (dryness), aftertaste, pungent/smoothness and appearance] and overall quality were accurately explained by the component profiles. In addition, we were able to select statistically significant components according to variable importance on projection (VIP). Our methodology clarified the correlation between sake attributes and 200 low molecular weight components and presented the importance of each component, thus providing new insights for the flavor study of sake. PMID:25060729
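
    OPLS itself is not available in scikit-learn, but the VIP selection step can be sketched with ordinary PLS regression as a stand-in; the data below are hypothetical, not the study's GC/MS profiles:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: 50 sake samples x 200 components, one sensory score.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=50)

pls = PLSRegression(n_components=3).fit(X, y)

# Variable importance in projection (VIP) from scores T, weights W,
# and y-loadings Q; VIP > 1 conventionally marks important variables.
T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
p, a = W.shape
ssy = np.array([(T[:, i] ** 2).sum() * Q[0, i] ** 2 for i in range(a)])
wnorm = W / np.linalg.norm(W, axis=0)
vip = np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())
print(np.sum(vip > 1))
```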

  20. Component Analysis versus Common Factor Analysis: Some issues in Selecting an Appropriate Procedure.

    PubMed

    Velicer, W F; Jackson, D N

    1990-01-01

    Should one do a component analysis or a factor analysis? The choice is not obvious, because the two broad classes of procedures serve a similar purpose, and share many important mathematical characteristics. Despite many textbooks describing common factor analysis as the preferred procedure, principal component analysis has been the most widely applied. Here we summarize relevant information for the prospective factor/component analyst. First, we discuss the key algebraic similarities and differences. Next, we analyze a number of theoretical and practical issues. The more practical aspects include: the degree of numeric similarity between solutions from the two methods, some common rules for the number of factors to be retained, effects resulting from overextraction, problems with improper solutions, and comparisons in computational efficiency. Finally, we review some broader theoretical issues: the factor indeterminacy issue, the differences between exploratory and confirmatory procedures, and the issue of latent versus manifest variables. PMID:26741964
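
    The practical question of numeric similarity between the two solutions is easy to probe directly; a minimal sketch on synthetic data, not tied to any particular study:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Synthetic responses: 300 observations of 10 variables driven by
# 2 latent factors plus unique variance.
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 10))
X = latent @ loadings + rng.normal(scale=0.5, size=(300, 10))

pca = PCA(n_components=2).fit(X)
fa = FactorAnalysis(n_components=2).fit(X)

# With many variables and modest unique variances, the two loading
# patterns are often numerically close -- one practical point at issue.
print(pca.components_.round(2))
print(fa.components_.round(2))
```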

  1. Illuminating the theoretical components of alexithymia using bifactor modeling and network analysis.

    PubMed

    Watters, Carolyn A; Taylor, Graeme J; Bagby, R Michael

    2016-06-01

    Alexithymia is a multifaceted personality construct that reflects deficits in affect awareness (difficulty identifying feelings, DIF; difficulty describing feelings, DDF) and operative thinking (externally oriented thinking, EOT; restricted imaginal processes, IMP), and is associated with several common psychiatric disorders. Over the years, researchers have debated the components that comprise the construct with some suggesting that IMP and EOT may reflect constructs somewhat distinct from alexithymia. In this investigation, we attempt to clarify the components and their interrelationships using a large heterogeneous multilanguage sample (N = 839), and an interview-based assessment of alexithymia (Toronto Structured Interview for Alexithymia; TSIA). To this end, we used 2 distinctly different but complementary methods, bifactor modeling and network analysis. Results of the confirmatory bifactor model and related reliability estimates supported a strong general factor of alexithymia; however, the majority of reliable variance for IMP was independent of this general factor. In contrast, network analysis results were based on a network comprised of only substantive partial correlations among TSIA items. Modularity analysis revealed 3 communities of items, where DIF and DDF formed 1 community, and EOT and IMP formed separate communities. Network metrics supported that the majority of central items resided in the DIF/DDF community and that IMP items were connected to the network primarily through EOT. Taken together, results suggest that IMP, at least as measured by the TSIA, may not be as salient a component of the alexithymia construct as are the DIF, DDF, and EOT components. PMID:26168310
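
    The network step above builds edges from partial correlations among items. One common way to obtain such edges is from the precision (inverse covariance) matrix; the sketch below uses hypothetical item data and omits the regularization a real analysis would likely apply:

```python
import numpy as np

# Hypothetical responses: 500 respondents x 6 interview items.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))
X[:, 1] += 0.8 * X[:, 0]            # make two items conditionally dependent

# Partial correlations from the precision matrix:
# r_ij|rest = -P_ij / sqrt(P_ii * P_jj); large entries become network edges.
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
print(pcorr.round(2))
```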

  2. Films and components from holographic recording based on bacteriorhodopsin

    NASA Astrophysics Data System (ADS)

    Hampp, Norbert A.; Sanio, Markus; Anderle, Klaus

    2000-03-01

    For more than ten years, films and cubes made from the halobacterial photochromic retinal protein bacteriorhodopsin (BR) have been discussed as storage media for short-term and long-term data storage. The efficient photochemistry of BR, its stability towards chemical and thermal degradation, its reversibility, and the polarization recording capability of bacteriorhodopsin films are attractive. The limited storage time of the recorded information imposes some restrictions on the use of this material. Because bacteriorhodopsin also returns to its initial state through a thermal pathway, recorded information decays with a characteristic time constant related to the lifetime of the M-state of the material. By genetic methods and by suitable film compositions this value can be extended up to several minutes, which is more than enough for all real-time applications. In some cases a longer storage time is desired, among them optical data storage. Optical modules and components based on bacteriorhodopsin films, which can be thermostated at different temperatures, are presented. They allow very sensitive optical recording and can be photochemically or thermally erased. These bacteriorhodopsin-containing modules may be used for high resolution optical recording with extended storage time.

  3. Internet MEMS design tools based on component technology

    NASA Astrophysics Data System (ADS)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium sized enterprises specialized in products that solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business, the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem calls for its own specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible means of offering software access along with flexible licensing methodologies, e.g. on a pay-per-use basis. New communication technologies like ADSL and TV cable or satellites as carriers promise to offer bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation, including a graphical editor for process specification. Currently, a new version is being brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set tailored to the requirements of the current problem to be solved.

  4. The Effectiveness of Blind Source Separation Using Independent Component Analysis for GNSS Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Yan, Jun; Dong, Danan; Chen, Wen

    2016-04-01

    Due to the development of GNSS technology and the improvement of its positioning accuracy, observational data obtained by GNSS are widely used in space geodesy and geodynamics research. The GNSS time series of observation stations contain a wealth of information, including geographical space changes, deformation of the Earth, migration of subsurface material, instantaneous deformation of the Earth, weak deformation and other blind signals. In order to decompose the instantaneous underground deformation, weak deformation and other blind signals hidden in GNSS time series, we apply Independent Component Analysis (ICA) to daily station coordinate time series of the Southern California Integrated GPS Network. ICA is based on the statistical characteristics of the observed signals: it exploits non-Gaussianity and independence to recover the source signals of the underlying geophysical events. For the post-processing of precise GNSS time series, this paper examines the series using the principal component analysis (PCA) module of QOCA and an ICA algorithm to separate the source signals. We then compare these two separation techniques, PCA and ICA, for separating the original signals related to geophysical disturbances from the observations. The analysis shows that when multiple factors are present, PCA suffers from ambiguity in the separation of source signals, so the attribution of its results is unclear, whereas ICA performs better; ICA is therefore well suited to GNSS time series in which the combination of source signals is unknown.
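
    The separation idea can be sketched with scikit-learn's FastICA on synthetic station series; the sources and mixing below are hypothetical, not SCIGN data:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical network: 8 stations observing mixtures of two sources,
# an annual loading signal and a transient step, plus noise.
rng = np.random.default_rng(4)
t = np.arange(1000) / 365.25
s1 = np.sin(2 * np.pi * t)              # seasonal signal
s2 = np.where(t > 1.5, 1.0, 0.0)        # transient deformation
S = np.c_[s1, s2]
A = rng.normal(size=(2, 8))             # station mixing matrix
X = S @ A + 0.05 * rng.normal(size=(1000, 8))

# ICA maximizes statistical independence (non-Gaussianity), so it can
# isolate distinct sources that PCA's orthogonal modes may mix.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)        # approximates s1, s2 up to sign/scale
```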

  5. 77 FR 69509 - Combining Modal Responses and Spatial Components in Seismic Response Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-19

    ... COMMISSION Combining Modal Responses and Spatial Components in Seismic Response Analysis AGENCY: Nuclear... Components in Seismic Response Analysis'' as an administratively changed guide in which there are minor... response analysis of nuclear power plant structures, systems, and components that are important to...

  6. Digital photogrammetry for quantitative wear analysis of retrieved TKA components.

    PubMed

    Grochowsky, J C; Alaways, L W; Siskey, R; Most, E; Kurtz, S M

    2006-11-01

    The use of new materials in knee arthroplasty demands a way in which to accurately quantify wear in retrieved components. Methods such as damage scoring, coordinate measurement, and in vivo wear analysis have been used in the past. The limitations of these methods illustrate the need for a different methodology that can accurately quantify wear, is relatively easy to perform, and uses a minimal amount of expensive equipment. Digital photogrammetry with off-the-shelf equipment represents a potentially quick and easy alternative. Eighty tibial inserts were visually examined for front and backside wear and digitally photographed in the presence of two calibrated reference fields. All images were segmented (via manual and automated algorithms) using Adobe Photoshop and National Institutes of Health ImageJ. Finally, wear was determined using ImageJ and Rhinoceros software. The absolute accuracy of the method and its repeatability/reproducibility by different observers were measured in order to determine the uncertainty of wear measurements. To determine if variation in wear measurements was due to implant design, 35 implants of the three most prevalent designs were subjected to retrieval analysis. The overall accuracy of area measurements was 97.8%. The error in automated segmentation was found to be significantly lower than that of manual segmentation. The photogrammetry method was found to be reasonably accurate and repeatable in measuring 2-D areas and applicable to determining wear. There was no significant variation in uncertainty detected among different implant designs. Photogrammetry has a broad range of applicability since it is size- and design-independent. A minimal amount of off-the-shelf equipment is needed for the procedure and no proprietary knowledge of the implant is needed. PMID:16649169

  7. Components for automated microfluidics sample preparation and analysis

    NASA Astrophysics Data System (ADS)

    Archer, M.; Erickson, J. S.; Hilliard, L. R.; Howell, P. B., Jr.; Stenger, D. A.; Ligler, F. S.; Lin, B.

    2008-02-01

    The increasing demand for portable devices to detect and identify pathogens represents an interdisciplinary effort between engineering, materials science, and molecular biology. Automation of both sample preparation and analysis is critical for performing multiplexed analyses on real world samples. This paper selects two possible components for such automated portable analyzers: modified silicon structures for use in the isolation of nucleic acids and a sheath flow system suitable for automated microflow cytometry. Any detection platform that relies on the genetic content (RNA and DNA) present in complex matrices requires careful extraction and isolation of the nucleic acids in order to ensure their integrity throughout the process. This sample pre-treatment step is commonly performed using commercially available solid phases along with various molecular biology techniques that require multiple manual steps and dedicated laboratory space. Regardless of the detection scheme, a major challenge in the integration of total analysis systems is the development of platforms compatible with current isolation techniques that will ensure the same quality of nucleic acids. Silicon is an ideal candidate for solid phase separations since it can be tailored structurally and chemically to mimic the conditions used in the laboratory. For analytical purposes, we have developed passive structures that can be used to fully ensheath one flow stream with another. As opposed to traditional flow focusing methods, our sheath flow profile is truly two dimensional, making it an ideal candidate for integration into a microfluidic flow cytometer. Such a microflow cytometer could be used to measure targets captured on either antibody- or DNA-coated beads.

  8. Fully automated diabetic retinopathy screening using morphological component analysis.

    PubMed

    Imani, Elaheh; Pourreza, Hamid-Reza; Banaee, Touka

    2015-07-01

    Diabetic retinopathy is the major cause of blindness in the world. It has been shown that early diagnosis can play a major role in the prevention of visual loss and blindness. This diagnosis can be made through regular screening and timely treatment. Besides, automation of this process can significantly reduce the work of ophthalmologists and alleviate inter- and intra-observer variability. This paper provides a fully automated diabetic retinopathy screening system with the ability of retinal image quality assessment. The novelty of the proposed method lies in the use of the Morphological Component Analysis (MCA) algorithm to discriminate between normal and pathological retinal structures. To this end, first a pre-screening algorithm is used to assess the quality of retinal images. If the quality of the image is not satisfactory, it is examined by an ophthalmologist and must be recaptured if necessary. Otherwise, the image is processed for diabetic retinopathy detection. In this stage, normal and pathological structures of the retinal image are separated by the MCA algorithm. Finally, the normal and abnormal retinal images are distinguished by statistical features of the retinal lesions. Our proposed system achieved 92.01% sensitivity and 95.45% specificity on the Messidor dataset, which is a remarkable result in comparison with previous work. PMID:25863517

  9. PRINCIPAL COMPONENT ANALYSIS OF SLOAN DIGITAL SKY SURVEY STELLAR SPECTRA

    SciTech Connect

    McGurk, Rosalie C.; Kimball, Amy E.; Ivezic, Zeljko

    2010-03-15

    We apply Principal Component Analysis (PCA) to ~100,000 stellar spectra obtained by the Sloan Digital Sky Survey (SDSS). In order to avoid strong nonlinear variation of spectra with effective temperature, the sample is binned into 0.02 mag wide intervals of the g - r color (-0.20 < g - r < 0.90, roughly corresponding to MK spectral types A3-K3), and PCA is applied independently for each bin. In each color bin, the first four eigenspectra are sufficient to describe the observed spectra within the measurement noise. We discuss correlations of eigencoefficients with metallicity and gravity estimated by the Sloan Extension for Galactic Understanding and Exploration Stellar Parameters Pipeline. The resulting high signal-to-noise mean spectra and the other three eigenspectra are made publicly available. These data can be used to generate high-quality spectra for an arbitrary combination of effective temperature, metallicity, and gravity within the parameter space probed by the SDSS. The SDSS stellar spectroscopic database and the PCA results presented here offer a convenient method to classify new spectra, to search for unusual spectra, to train various spectral classification methods, and to synthesize accurate colors in arbitrary optical bandpasses.
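
    The "four eigenspectra suffice" statement corresponds to reconstructing each spectrum from the mean plus four principal components and checking that the residuals sit at the noise level. A minimal sketch with stand-in spectra:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for one g-r color bin: 1000 spectra x 500 pixels.
rng = np.random.default_rng(6)
spectra = rng.normal(loc=1.0, scale=0.01, size=(1000, 500))

pca = PCA(n_components=4).fit(spectra)
coeffs = pca.transform(spectra)            # eigencoefficients per star

# Reconstruction from mean spectrum + four eigenspectra; residual RMS
# at the noise level indicates four components are sufficient.
recon = pca.mean_ + coeffs @ pca.components_
print(np.sqrt(((spectra - recon) ** 2).mean()))
```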

  10. Dissection of the Hormetic Curve: Analysis of Components and Mechanisms

    PubMed Central

    Lushchak, Volodymyr I.

    2014-01-01

    The relationship between the dose of an effector and the biological response frequently is not described by a linear function, and, moreover, in some cases the dose-response relationship may change from positive/adverse to adverse/positive with increasing dose. This complicated relationship is called “hormesis”. This paper provides a short analysis of the concept along with a description of the approaches used to characterize hormetic relationships. The whole hormetic curve can be divided into three zones: I – a lag-zone where no changes are observed with increasing dose; II – a zone where beneficial/adverse effects are observed; and III – a zone where the effects are opposite to those seen in zone II. Approaches are proposed to analyze the molecular components involved in the development of the hormetic character of dose-response relationships, using specific genetic lines or inhibitors of regulatory pathways. The discussion is then extended to suggest a new parameter (the half-width of the hormetic curve at zone II) for quantitative characterization of the hormetic curve. The problems limiting progress in the development of the hormesis concept, such as low reproducibility and predictability, may be solved, at least partly, by deciphering the molecular mechanisms underlying the hormetic dose-effect relationship. PMID:25249836

  11. Multi-class stain separation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Trahearn, Nicholas; Snead, David; Cree, Ian; Rajpoot, Nasir

    2015-03-01

    Stain separation is the process whereby a full colour histology section image is transformed into a series of single channel images, each corresponding to a given stain's expression. Many algorithms in the field of digital pathology are concerned with the expression of a single stain, thus stain separation is a key preprocessing step in these situations. We present a new versatile method of stain separation. The method uses Independent Component Analysis (ICA) to determine a set of statistically independent vectors, corresponding to the individual stain expressions. In comparison to other popular approaches, such as PCA and NNMF, we found that ICA gives a superior projection of the data with respect to each stain. In addition, we introduce a correction step to improve the initial results provided by the ICA coefficients. Many existing approaches only consider separation of two stains, with primary emphasis on Haematoxylin and Eosin. We show that our method is capable of making a good separation when there are more than two stains present. We also demonstrate our method's ability to achieve good separation on a variety of different stain types.

  12. Assessment of models for pedestrian dynamics with functional principal component analysis

    NASA Astrophysics Data System (ADS)

    Chraibi, Mohcine; Ensslen, Tim; Gottschalk, Hanno; Saadi, Mohamed; Seyfried, Armin

    2016-06-01

    Many agent-based simulation approaches have been proposed for pedestrian flow. As such models are applied, e.g., in evacuation studies, their quality and reliability are of vital interest. Pedestrian trajectories are functional data, and thus functional principal component analysis is a natural tool to assess the quality of pedestrian flow models beyond average properties. In this article we conduct functional Principal Component Analysis (PCA) on the trajectories of pedestrians passing through a bottleneck. In this way it is possible to assess the quality of the models not only on the basis of average values but also by considering their fluctuations. We benchmark two agent-based models of pedestrian flow against the experimental data using both average and stochastic features from the PCA. Functional PCA proves to be an efficient tool to detect deviations between simulation and experiment and to assess the quality of pedestrian models.
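
    On a common time grid, functional PCA reduces to ordinary PCA of the discretized curves; a minimal sketch with hypothetical bottleneck trajectories:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical trajectories: 100 pedestrians, lateral position sampled
# at 50 common (normalized) time points while passing the bottleneck.
rng = np.random.default_rng(7)
t = np.linspace(0, 1, 50)
base = 0.5 * np.sin(np.pi * t)                      # mean path
mode = 0.1 * rng.normal(size=(100, 1)) * np.cos(np.pi * t)
curves = base + mode + 0.02 * rng.normal(size=(100, 50))

# Eigenfunctions (rows of components_) capture the dominant fluctuation
# modes around the mean trajectory; the ratio shows their weight.
fpca = PCA(n_components=3).fit(curves)
print(fpca.explained_variance_ratio_)
```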

  13. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, environmental tobacco smoke (ETS) items were not a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc. PMID:27227487

  14. Homogenization of soil properties map by Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Valverde Arias, Omar; Garrido, Alberto; Villeta, Maria; Tarquis, Ana Maria

    2016-04-01

    It is widely known that extreme climatic phenomena are occurring with greater intensity and frequency. This has put more pressure on farming, making it very important for governments and institutions to implement agricultural risk-management policies. One of the main strategies is to transfer risk through agricultural insurance. Index-based agricultural insurance has gained importance in the last decade; it compares measured index values with a defined threshold that triggers compensation for damage losses. However, index insurance cannot rely on an isolated measurement. It must be integrated into a complete monitoring system that uses many sources of information and tools, such as index influence areas, crop production risk maps, crop yields, and claim statistics. Establishing index influence areas requires secondary information delineating homogeneous climatic and soil areas; within each homogeneous class, index measurements on the crops of interest will be similar, thereby reducing basis risk. An efficient method is needed to obtain homogeneous areas that does not depend solely on expert criteria and that can be widely applied. For this reason, this study assesses two conventional agricultural and geographic methods based on expert criteria (control and climatic maps) and one classical statistical method of multi-factorial analysis (factorial map), all used to homogenize soil and climatic characteristics. The resulting maps were validated by agricultural and spatial analysis; the statistical method (factorial map) produced very good results, proving to be an efficient and accurate method that could be used for similar purposes.

  15. Demasking the integrated value of discharge - Advanced sensitivity analysis on the components of hydrological models

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin

    2016-04-01

    The hydrologic response variable most often used in sensitivity analysis is discharge, which provides an integrated value of all catchment processes. A typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, because discharge is the aggregated effect of all hydrological processes, the sensitivity signal of a given model parameter can be strongly masked. A more advanced form of sensitivity analysis is achieved by investigating how the sensitivity of a particular modelled process variable relates to changes in a parameter. On this basis, the controlling parameters for different hydrological components can be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed both for the modelled hydrological components themselves and for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both a stronger indication of parameter sensitivity and a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. Relating these results to the model structure shows that the sensitivity of model parameters is influenced by the function of the parameter: while capacity parameters show more sensitivity to the modelled hydrological component, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal.

  16. Outlier analysis and principal component analysis to detect fatigue cracks in waveguides

    NASA Astrophysics Data System (ADS)

    Rizzo, Piervincenzo; Cammarata, Marcello; Dutta, Debaditya; Sohn, Hoon

    2009-03-01

    Ultrasonic Guided Waves (UGWs) are a useful tool in structural health monitoring (SHM) applications that can benefit from built-in transduction, moderately large inspection ranges and high sensitivity to small flaws. This paper describes a SHM method based on UGWs, discrete wavelet transform (DWT), outlier analysis and principal component analysis (PCA) able to detect and quantify the onset and propagation of fatigue cracks in structural waveguides. The method combines the advantages of guided wave signals processed through the DWT with the outcomes of selecting defect-sensitive features to perform a multivariate diagnosis of damage. The framework presented in this paper is applied to the detection of fatigue cracks in a steel beam. The probing hardware consists of a PXI platform that controls the generation and measurement of the ultrasonic signals by means of piezoelectric transducers made of Lead Zirconate Titanate. Although the approach is demonstrated in a beam test, it is argued that the proposed method is general and applicable to any structure that can sustain the propagation of UGWs.
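
    The outlier-analysis stage typically scores each feature vector by its Mahalanobis distance to a baseline (undamaged) feature cloud; a minimal sketch with hypothetical DWT-derived features:

```python
import numpy as np

# Hypothetical damage-sensitive features (e.g., wavelet energies of
# guided-wave signals): baseline set from the pristine beam, one test vector.
rng = np.random.default_rng(8)
baseline = rng.normal(size=(200, 4))
test = np.array([3.5, 0.2, -2.8, 1.9])

# Squared Mahalanobis distance to the baseline cloud, thresholded
# against the empirical distribution of baseline distances.
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
d2 = (test - mu) @ cov_inv @ (test - mu)
d2_base = np.einsum('ij,jk,ik->i', baseline - mu, cov_inv, baseline - mu)
print(d2 > np.percentile(d2_base, 99))   # True flags potential damage
```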

  17. Fatigue detection in strength training using three-dimensional accelerometry and principal component analysis.

    PubMed

    Brown, Niklas; Bichler, Sebastian; Fiedler, Meike; Alt, Wilfried

    2016-06-01

    Detection of neuro-muscular fatigue in strength training is difficult, due to missing criterion measures and the complexity of fatigue. Thus, a variety of methods are used to determine fatigue. The aim of this study was to use a principal component analysis (PCA) on a multifactorial data-set based on kinematic measurements to determine fatigue. Twenty participants (experienced in strength training, 60% male) executed 3 sets of 3 exercises at 50% (12 repetitions), 75% (12 repetitions) and 100% of their 12-repetition maximum (12RM). Data were collected with a 3D accelerometer and analysed by a newly developed algorithm to evaluate parameters for each repetition. A PCA with six variables was carried out on the results. A fatigue factor was computed based on the loadings on the first component. One-way ANOVA with Bonferroni post hoc analysis was calculated to test for differences between the intensity levels. All six input variables had high loadings on the first component. The ANOVA showed a significant difference between intensities (p < 0.001). Post-hoc analysis revealed a difference between 100% and the lower intensities (p < 0.05) and no difference between 50% and 75%-12RM. Based on these results, it is possible to distinguish between fatigued and non-fatigued sets of strength training. PMID:27111008
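
    The analysis chain (PCA fatigue factor, then one-way ANOVA across intensities) can be sketched as follows, with hypothetical per-set kinematic features in place of the accelerometer data:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

# Hypothetical six kinematic variables per set, 20 sets per intensity.
rng = np.random.default_rng(9)
X = np.r_[rng.normal(0.0, 1, (20, 6)),
          rng.normal(0.2, 1, (20, 6)),
          rng.normal(1.0, 1, (20, 6))]
labels = np.repeat([50, 75, 100], 20)

# Fatigue factor: projection onto the first principal component.
fatigue = PCA(n_components=1).fit_transform(X).ravel()

# One-way ANOVA across the three intensity levels.
f, p = stats.f_oneway(fatigue[labels == 50], fatigue[labels == 75],
                      fatigue[labels == 100])
print(f, p)
```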

  18. Hybrid integrated photonic components based on a polymer platform

    NASA Astrophysics Data System (ADS)

    Eldada, Louay A.

    2003-06-01

    We report on a polymer-on-silicon optical bench platform that enables the hybrid integration of elemental passive and active optical functions. Planar polymer circuits are produced photolithographically, and slots are formed in them for the insertion of chips and films of a variety of materials. The polymer circuits provide interconnects, static routing elements such as couplers, taps, and multi/demultiplexers, as well as thermo-optically dynamic elements such as switches, variable optical attenuators, and tunable notch filters. Crystal-ion-sliced thin films of lithium niobate are inserted in the polymer circuit for polarization control or for electro-optic modulation. Films of yttrium iron garnet and neodymium iron boron magnets are inserted in order to magneto-optically achieve non-reciprocal operation for isolation and circulation. Indium phosphide and gallium arsenide chips are inserted for light generation, amplification, and detection, as well as wavelength conversion. The functions enabled by this multi-material platform span the range of the building blocks needed in optical circuits, while using the highest-performance material system for each function. We demonstrated complex-functionality photonic components based on this technology, including a metro ring node module and a tunable optical transmitter. The metro ring node chip includes switches, variable optical attenuators, taps, and detectors; it enables optical add/drop multiplexing, power monitoring, and automatic load balancing, and it supports shared and dedicated protection protocols in two-fiber metro ring optical networks. The tunable optical transmitter chip includes a tunable external cavity laser, an isolator, and a high-speed modulator.

  19. ENVIRONMENTAL ANALYSIS OF GASOLINE BLENDING COMPONENTS THROUGH THEIR LIFE CYCLE

    EPA Science Inventory

    The purpose of this study is to assess the contribution of the three major gasoline blending components to the potential environmental impacts (PEI), which are the reformate, alkylate and cracked gasoline. This study accounts for losses of the gasoline blending components due to...

  1. A Component Analysis of Positive Behaviour Support Plans

    ERIC Educational Resources Information Center

    McClean, Brian; Grey, Ian

    2012-01-01

    Background: Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Method: Sixty-one staff working with individuals with intellectual disability…

  2. Transcriptome analysis of all two-component regulatory system mutants of Escherichia coli K-12.

    PubMed

    Oshima, Taku; Aiba, Hirofumi; Masuda, Yasushi; Kanaya, Shigehiko; Sugiura, Masahito; Wanner, Barry L; Mori, Hirotada; Mizuno, Takeshi

    2002-10-01

    We have systematically examined the mRNA profiles of 36 two-component deletion mutants, which include all two-component regulatory systems of Escherichia coli, under a single growth condition. DNA microarray results revealed that the mutants belong to one of three groups based on their gene expression profiles in Luria-Bertani broth under aerobic conditions: (i) those with no or little change; (ii) those with significant changes; and (iii) those with drastic changes. Under these conditions, the anaeroresponsive ArcB/ArcA system, the osmoresponsive EnvZ/OmpR system and the response regulator UvrY showed the most drastic changes. Cellular functions such as flagellar synthesis and expression of the RpoS regulon were affected by multiple two-component systems. A high correlation coefficient of expression profile was found between several two-component mutants. Together, these results support the view that a network of functional interactions, such as cross-regulation, exists between different two-component systems. The compiled data are available at our website (http://ecoli.aist-nara.ac.jp/xp_analysis/2_components). PMID:12366850

  3. Inverse spatial principal component analysis for geophysical survey data interpolation

    NASA Astrophysics Data System (ADS)

    Li, Qingmou; Dehler, Sonya A.

    2015-04-01

    The starting point for data processing, visualization, and overlay with other data sources in geological applications often involves building a regular grid by interpolation of geophysical measurements. Typically, the sampling interval along survey lines is much smaller than the spacing between survey lines, because the geophysical recording system can operate at a high sampling rate while the costs and slower speeds of operational platforms limit line spacing. However, currently available interpolation methods often smooth the data observed at a higher sampling rate along a survey line to accommodate the lower spacing across lines, and much of the higher resolution information is not captured in the interpolation process. To address this problem, a method termed inverse spatial principal component analysis (isPCA) is developed. In the isPCA method, a whole profile observation together with its line position is handled as an entity, and a survey collection of line entities is analyzed for interpolation. To test its performance, the isPCA method is used to process a simulated airborne magnetic survey derived from an existing magnetic grid offshore the Atlantic coast of Canada. The interpolation results using the isPCA method and other methods are compared with the original survey grid. It is demonstrated that the isPCA method outperforms the Inverse Distance Weighting (IDW), Kriging (Geostatistical), and MINimum Curvature (MINC) interpolation methods in retaining detailed anomaly structures and restoring original values. In a second test, a high resolution magnetic survey offshore Cape Breton, Nova Scotia, Canada, was processed and the results are compared with other geological information. This example demonstrates the effective performance of the isPCA method in basin structure identification.

  4. Application of the component paradigm for analysis and design of advanced health system architectures.

    PubMed

    Blobel, B

    2000-12-01

    Based on the component paradigm for software engineering as well as on a consideration of common middleware approaches for health information systems, a generic component model has been developed supporting analysis, design, implementation and harmonisation of such complex systems. Using methods like abstract automatons and the Unified Modelling Language (UML), it could be shown that such components enable the modelling of real-world systems at different levels of abstractions and granularity, so reflecting different views on the same system in a generic and consistent way. Therefore, not only programs and technologies could be modelled, but also business processes, organisational frameworks or security issues as done successfully within the framework of several European projects. PMID:11137472

  5. Errors analysis of dimensional metrology for internal components assembly of EAST

    NASA Astrophysics Data System (ADS)

    Gu, Yongqi; Liu, Chen; Xi, Weibin; Lu, Kun; Wei, Jing; Song, Yuntao; Yu, Liandong; Ge, Jian; Zheng, Yuanyang; Zhao, Huining; Zheng, Fubin; Wang, Jun

    2016-01-01

    The precision of dimensional measurement plays an important role in guaranteeing the assembly accuracy of the internal components of the EAST device during its upgrading phase. In this paper, experimental research and analysis were carried out based on a combined three-dimensional measurement system, comprising a Laser Tracker, a flexible measuring arm, and a measurement fiducial network, which is used for alignment and measurement of EAST components during the assembly process. Error sources such as temperature, gravity, and welding were analyzed, and the effective weight of each kind of error source was estimated by simulation. These results were then used to correct and compensate the measured data; the stability and consistency of the measurement results across different measurement processes were greatly improved, and the assembly precision of the EAST components was ensured.

  6. A new acceleration switch based on separated mass component and elastic component

    NASA Astrophysics Data System (ADS)

    Wu, Liping; Hu, Jun; Yang, Bo; Shao, Qing; Peng, Gang

    2010-10-01

    This paper presents a new linear inertial acceleration switch which senses inertial acceleration and gives a signal at its switchpoint. It is an entirely mechanical device with two distinguishing characteristics: a simple structure and immunity to environmental interference. The structure and working principle of the switch are introduced, the design of the spring is then analyzed and simulated, and finally the soundness of the switch design is confirmed by test data from samples. In this acceleration switch, the elastic component is a leaf spring, and the mass component is a standard steel ball. The spring and the ball are separated instead of rigidly connected, which keeps the whole structure simple. When the switch is loaded along its working direction, the ball and the spring interact and the spring is active; when it is not, the ball and the spring are separated, so external environmental forces act on the mass instead of on the spring, and the spring is unaffected. This particularity makes the switch highly resistant to environmental interference. Some parameters of the inertial switch are as follows: (1) the overall dimensions of the switch are about 28 mm × 12 mm × 12 mm; (2) the systemic precision of the switch is 1.5%; (3) the switch can endure 0.2 g²/Hz random vibration. It is suggested that this inertial switch can be applied in high-consequence systems.

  7. Electromagnetic crystal based terahertz thermal radiators and components

    NASA Astrophysics Data System (ADS)

    Wu, Ziran

    prototyping approach. Third, an all-dielectric THz waveguide is designed, fabricated and characterized. The design is based on a hollow-core EMXT waveguide, and the fabrication is implemented with the THz prototyping method. Characterization results for the waveguide power loss factor show good consistency with the simulation, and waveguide propagation loss as low as 0.03 dB/mm at 105 GHz is demonstrated. Several design parameters are also varied and their impacts on the waveguide performance investigated theoretically. Finally, a THz EMXT antenna based on expanding the defect radius of the EMXT waveguide into a horn shape is proposed and studied. The boresight directivity and main beam angular width of the optimized EMXT horn antenna are comparable with those of a copper horn antenna of the same dimensions at low frequencies, and much better than those of the copper horn at high frequencies. The EMXT antenna has been successfully fabricated via the same THz prototyping, and we believe this is the first time an EMXT antenna of this architecture has been fabricated. Far-field measurement of the EMXT antenna radiation pattern is under way. Also, in order to integrate planar THz solid-state devices (especially sources and detectors) and THz samples under test with the potential THz micro-system fabricable by the prototyping approach, an EMXT waveguide-to-microstrip line transition structure is designed. The structure uses tapered solid dielectric waveguides on both ends to transfer THz energy from the EMXT waveguide defect onto the microstrip line. Simulation of the transition structure in a back-to-back configuration yields about -15 dB insertion loss, mainly due to the dielectric material loss. The coupling and radiation loss of the transition structure is estimated to be -2.115 dB. The fabrication and characterization of the transition system is currently underway. With all the above THz components realized in the future, integrated THz micro-systems manufactured by the same prototyping technique will be possible.

  8. Spatiotemporal analysis of GPS time series in vertical direction using independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Dai, Wujiao; Peng, Wei; Meng, Xiaolin

    2015-11-01

    GPS has been widely used in the fields of geodesy and geodynamics thanks to its technological development and the improvement of positioning accuracy. A time series observed by GPS in the vertical direction usually contains tectonic signals, non-tectonic signals, residual atmospheric delay, measurement noise, etc. Analyzing this information is the basis of crustal deformation research, and analyzing the GPS time series and extracting the non-tectonic information are helpful for studying the effects of various geophysical events. Principal component analysis (PCA) is an effective tool for spatiotemporal filtering and GPS time series analysis, but because it cannot extract statistically independent components, PCA is ill-suited to recovering the implicit information in time series. Independent component analysis (ICA) is a statistical method of blind source separation (BSS) that can separate original signals from mixed observations. In this paper, ICA is used as a spatiotemporal filtering method to analyze the spatial and temporal features of vertical GPS coordinate time series in the UK and the Sichuan-Yunnan region in China. Meanwhile, the contributions from atmospheric and soil moisture mass loading are evaluated. Analysis of the correlation between the independent components and the mass loading, together with their spatial distributions, shows that the signals extracted by ICA have a strong correlation with the non-tectonic deformation, indicating that ICA performs well in spatiotemporal analysis.

  9. Data-Parallel Mesh Connected Components Labeling and Analysis

    SciTech Connect

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
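
    The serial core of the labeling scheme is the union-find (disjoint set) structure; a minimal sketch (the paper's multi-stage, distributed merging across processors is not reproduced here):

```python
# Union-find with path compression and union by rank; cells that share
# a face are unioned, and find() then yields a sub-mesh label per cell.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)                      # five cells in a toy mesh
for a, b in [(0, 1), (3, 4)]:          # face-adjacency edges
    uf.union(a, b)
print([uf.find(i) for i in range(5)])  # three sub-meshes: {0,1}, {2}, {3,4}
```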

  10. Hyperspectral image compression and target detection using nonlinear principal component analysis

    NASA Astrophysics Data System (ADS)

    Du, Qian; Wei, Wei; Ma, Ben; Younan, Nicolas H.

    2013-09-01

    The widely used principal component analysis (PCA) is implemented in a nonlinear fashion by an auto-associative neural network. Compared to other nonlinear versions, such as kernel PCA, this nonlinear PCA has explicit encoding and decoding processes, so the data can be transformed back to the original space. Its data compression performance is similar to that of PCA, but its data analysis performance, such as target detection, is much better. To expedite the training process, graphics processing unit (GPU)-based parallel computing is applied.
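
    An auto-associative network is trained to reproduce its input through a narrow bottleneck; the bottleneck activations are the nonlinear principal components. A minimal CPU sketch on synthetic data (no GPU, and not the paper's architecture):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic 'pixels': 2000 samples x 30 bands lying near a curved
# 2-D manifold, so a 2-unit bottleneck can compress them.
rng = np.random.default_rng(10)
u = rng.uniform(-1, 1, size=(2000, 2))
X = np.c_[u, u[:, :1] ** 2, np.zeros((2000, 27))]
X += 0.01 * rng.normal(size=X.shape)

# Encoder 30->8->2 and decoder 2->8->30 in one network; training the
# network to reproduce its input makes the bottleneck a nonlinear PCA.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation='tanh',
                  max_iter=2000, random_state=0)
ae.fit(X, X)
print(np.mean((ae.predict(X) - X) ** 2))   # reconstruction error
```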

  11. Alternating Least Squares Algorithms for Simultaneous Components Analysis with Equal Component Weight Matrices in Two or More Populations.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; ten Berge, Jos M. F.

    1989-01-01

    Two alternating least squares algorithms are presented for the simultaneous components analysis method of R. E. Millsap and W. Meredith (1988). These methods, one for small data sets and one for large data sets, can indicate whether or not a global optimum for the problem has been attained. (SLD)

  12. Estimation and Psychometric Analysis of Component Profile Scores via Multivariate Generalizability Theory

    ERIC Educational Resources Information Center

    Grochowalski, Joseph H.

    2015-01-01

    Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…

  13. Application of reliability analysis method to fusion component testing

    SciTech Connect

    Ying, A.Y.; Abdou, M.A.

    1994-12-31

    The term reliability here implies that a component satisfies a set of performance criteria while under specified conditions of use over a specified period of time. For fusion nuclear technology, the reliability goal to be pursued is the development of a mean time between failures (MTBF) for a component which is longer than its lifetime goal. While the component lifetime is mainly determined by the fluence limit (i.e., damage level) that leads to performance degradation or failure, the MTBF represents an arithmetic average life of all units in a population. One method of assessing the reliability goal involves determining the component availability needed to meet the goal plant availability, defining a test-analyze-fix development program to improve component reliability, and quantifying both the test times and the number of test articles required to ensure that a specified target MTBF is met. Statistically, constant failure rates and exponential life distributions are assumed for the analyses, and blanket component development is used as an example. However, as data are collected, the probability distribution of the parameter of interest can be updated in a Bayesian fashion. The nuclear component testing program will be structured such that reliability requirements for DEMO can be achieved. The program shall not exclude the practice of good design (such as reducing the complexity of the system to the minimum essential for the required operation), the execution of high quality manufacturing and inspection processes, and the implementation of quality assurance and control for component development. In fact, the assurance of a high quality testing/development program is essential so that reliability is not left in question.
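
    Under the stated constant-failure-rate (exponential) assumption, the required total test time to demonstrate a target MTBF follows the standard chi-square relation; the numbers below are illustrative only, not from the paper:

```python
from scipy.stats import chi2

# Time-truncated demonstration test, constant failure rate assumed:
#   T_total = MTBF_target * chi2.ppf(confidence, 2 * (r + 1)) / 2
# where r is the number of failures allowed during the test.
def required_test_time(mtbf_target, confidence=0.9, failures_allowed=0):
    return mtbf_target * chi2.ppf(confidence, 2 * (failures_allowed + 1)) / 2

# e.g., zero-failure demonstration of a hypothetical 5000-h MTBF at 90%
# confidence; the total time may be split across several test articles.
total_hours = required_test_time(5000, 0.9, 0)
print(total_hours, total_hours / 4)   # ~11513 h, or ~2878 h on each of 4 units
```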

  14. Joint Procrustes Analysis for Simultaneous Nonsingular Transformation of Component Score and Loading Matrices

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2009-01-01

    In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…
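
    The indeterminacy in question is easy to verify numerically: post-multiplying the score matrix by any nonsingular T while applying inv(T) appropriately to the loadings leaves the reconstructed data unchanged. A tiny check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(11)
F = rng.normal(size=(10, 2))             # component scores
A = rng.normal(size=(5, 2))              # loadings
T = np.array([[2.0, 1.0], [0.0, 1.0]])   # any nonsingular transformation

# F A' equals (F T)(A inv(T)')' -- the model fit cannot distinguish them.
X1 = F @ A.T
X2 = (F @ T) @ (A @ np.linalg.inv(T).T).T
print(np.allclose(X1, X2))               # True
```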

  15. Microroughness analysis of thin film components for VUV applications

    NASA Astrophysics Data System (ADS)

    Ferre-Borrull, Josep; Duparre, Angela; Steinert, Joerg; Ristau, Detlev; Quesnel, Etienne

    2000-11-01

    For the roughness characterization of optical surfaces, a new procedure based on the analysis of their power spectral density (PSD) functions has been developed. The method consists of fitting the PSDs obtained from Atomic Force Microscopy measurements at different scan sizes to mathematical models. With this procedure the microstructural properties of optical surfaces and coatings can be represented by a reduced set of numbers that correspond to the characteristic parameters of the mathematical models. For optical coatings this method allows a separate study of the influence of the substrate and the layers on the overall sample roughness. As an example, the method is applied to MgF2 and LaF3 films for VUV applications. We investigated a set of single layers deposited onto superpolished CaF2, fused silica and Si substrates. The samples were deposited by ion beam sputtering, boat evaporation and e-beam evaporation. A comparison of the influence of the substrate on the development of roughness and lateral structures has been performed, as well as a study of the dependence of the roughness properties of the coatings on the deposition process. Complementary investigations of roughness-related scattering, consisting of measurements of Total Scatter at 193 nm and 633 nm and calculation of the expected scattering based on theory, are presented.

  16. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Basis Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  17. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has paid attention to prediction for analog circuits. The few existing methods lack a connection to circuit analysis when extracting and calculating features, so that fault indicator (FI) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degradation of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degradation trend of the single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of the single components of analog circuits. Since the FI feature set is calculated more reasonably, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  18. Technological Alternatives to Paper-Based Components of Team-Based Learning

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Walker, Joshua D.

    2008-01-01

    The authors have been using components of team-based learning (TBL) in two undergraduate courses at the University of Texas for several years: an educational psychology survey course--Cognition, Human Learning and Motivation--and Introduction to Statistics. In this chapter, they describe how they used technology in classes of fifty to seventy…

  19. Component Architectures and Web-Based Learning Environments

    ERIC Educational Resources Information Center

    Ferdig, Richard E.; Mishra, Punya; Zhao, Yong

    2004-01-01

    The Web has caught the attention of many educators as an efficient communication medium and content delivery system. But we feel there is another aspect of the Web that has not been given the attention it deserves. We call this aspect of the Web its "component architecture." Briefly it means that on the Web one can develop very complex…

  20. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method. PMID:25571123
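
    A minimal sketch of a PCA ensemble of this general kind is given below: each ensemble member reduces a different subset of epochs with PCA, classifies with LDA, and votes are averaged. The epoch dimensions, channel count, and choice of LDA as the base classifier are assumptions, not the authors' exact design.

    # Sketch: PCA ensemble for P300 epoch classification (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 14 * 64))      # 600 epochs, 14 ch x 64 samples
    y = rng.integers(0, 2, 600)              # 1 = target (P300), 0 = non-target
    X[y == 1] += 0.3                         # crude class separation for demo

    members = []
    for seed in range(5):
        idx = rng.choice(len(X), len(X) // 2, replace=False)
        pca = PCA(n_components=20).fit(X[idx])
        lda = LinearDiscriminantAnalysis().fit(pca.transform(X[idx]), y[idx])
        members.append((pca, lda))

    scores = np.mean([lda.predict_proba(pca.transform(X))[:, 1]
                      for pca, lda in members], axis=0)   # averaged votes
    print("ensemble accuracy:", np.mean((scores > 0.5) == y))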

  1. A method for the detection of alcohol vapours based on optical sensing of magnesium 5,10,15,20-tetraphenyl porphyrin thin film by an optical spectrometer and principal component analysis.

    PubMed

    Kladsomboon, Sumana; Kerdcharoen, Teerakiat

    2012-12-13

    In this work we have proposed a method for the detection of alcohol vapours, i.e. methanol, ethanol and isopropanol, based on the optical sensing response of magnesium 5,10,15,20-tetraphenyl porphyrin (MgTPP) thin films, as measured by optical spectrometry with the assistance of chemometric analysis. We have implemented a scheme which allows a laboratory UV-vis spectrometer to act as a so-called "electronic nose" with very little modification. MgTPP thin films were prepared by a spin coating technique, using chloroform as the solvent, and then subjected to thermal annealing at 280°C in an argon atmosphere. These MgTPP optical gas sensors showed significantly stronger responses to methanol than to ethanol and isopropanol, under dynamic flow of alcohol vapours at the same mol% alcohol concentration. Density functional theory (DFT) calculations were performed to model the underlying mechanism of this selectivity. The performance of the optical gas sensors was optimised by varying the fabrication parameters. It is hoped that the MgTPP thin film together with an off-the-shelf optical spectrometer and a simple chemometrics algorithm can be a valuable tool for the analysis of alcoholic content in the beverage industry. PMID:23206399
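
    The chemometric step can be sketched as PCA applied to absorbance-change spectra; the simulated spectral responses below are stand-ins for real MgTPP film data, and the peak positions are invented for illustration.

    # Sketch: PCA scores of simulated absorbance-change spectra for three vapours.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(14)
    wl = np.linspace(350, 700, 176)           # visible band (nm)

    def response(center):                     # absorbance change per exposure
        return np.exp(-0.5 * ((wl - center) / 30.0) ** 2) \
               + rng.normal(0, 0.02, wl.size)

    labels = ["methanol"] * 10 + ["ethanol"] * 10 + ["isopropanol"] * 10
    X = np.array([response(c) for c in [430] * 10 + [445] * 10 + [450] * 10])

    scores = PCA(n_components=2).fit_transform(X)
    for name in set(labels):
        pts = scores[[i for i, l in enumerate(labels) if l == name]]
        print(f"{name:12s} PC1 mean = {pts[:, 0].mean():+.2f}")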

  2. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy.

    PubMed

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-30

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features. PMID:27071414

  3. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
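
    A minimal sketch of this analysis pipeline, assuming a frames-by-pixels data matrix: PCA yields per-component time courses and spatial score maps, and the first three score maps are stacked into RGB channels. The synthetic dimensions are illustrative.

    # Sketch: PCA of a dynamic fluorescence image series with RGB mapping.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    T, H, W = 120, 64, 64                         # frames, image height/width
    frames = rng.normal(size=(T, H, W)).reshape(T, H * W)

    pca = PCA(n_components=3)
    time_courses = pca.fit_transform(frames)      # (T, 3): PC time courses
    scores = pca.components_.reshape(3, H, W)     # per-pixel spatial scores

    # Normalize each PC score map to [0, 1] and stack as an RGB image.
    lo = scores.min(axis=(1, 2), keepdims=True)
    hi = scores.max(axis=(1, 2), keepdims=True)
    rgb = np.moveaxis((scores - lo) / (hi - lo), 0, -1)   # (H, W, 3)
    print("explained variance ratios:", pca.explained_variance_ratio_)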

  4. A Content Analysis of Preconception Health Education Materials: Characteristics, Strategies, and Clinical-Behavioral Components

    PubMed Central

    Levis, Denise M.; Westbrook, Kyresa

    2015-01-01

    Purpose Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. Design We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Setting Not applicable. Participants Not applicable. Method Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of materials’ characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). Results The self-assessment of PCH behaviors was the most common material (28%) to appear in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). “Practicing PCH benefits the baby’s health” was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Conclusion Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed around PCH components, in terms of packaging and increasing motivation, which could guide use and placement of clinical-behavioral components within promotional materials. PMID:23286661

  5. An Augmented Classical Least Squares Method for Quantitative Raman Spectral Analysis against Component Information Loss

    PubMed Central

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss and its predictive power is comparable to that of PLS or PCR. PMID:23956689
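
    The augmentation idea can be sketched as follows; the synthetic spectra, the number of components, and the choice of surrogate spectral channel are all illustrative assumptions rather than the authors' selection procedure.

    # Sketch: augmented CLS, where a selected low-correlation spectral signal
    # stands in for the concentration of an unmodeled component.
    import numpy as np

    rng = np.random.default_rng(3)
    m, p = 40, 200                             # calibration samples, wavelengths
    C_known = rng.uniform(0.1, 1.0, (m, 2))    # known analyte concentrations
    K_true = rng.normal(size=(3, p)) ** 2      # 2 known + 1 unknown pure spectra
    C_all = np.hstack([C_known, rng.uniform(0.1, 1.0, (m, 1))])
    S = C_all @ K_true + 0.01 * rng.normal(size=(m, p))

    surrogate = S[:, [123]]                    # channel choice is an assumption
    C_aug = np.hstack([C_known, surrogate])    # augmented concentration matrix

    K_hat = np.linalg.lstsq(C_aug, S, rcond=None)[0]   # calibration: S = C K
    # Prediction for a new spectrum s: c = (K K^T)^{-1} K s
    s_new = C_all[0] @ K_true
    c_hat = np.linalg.solve(K_hat @ K_hat.T, K_hat @ s_new)
    print("predicted known concentrations:", c_hat[:2], "true:", C_known[0])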

  6. [Infrared spectroscopy analysis of SF6 using multiscale weighted principal component analysis].

    PubMed

    Peng, Xi; Wang, Xian-Pei; Huang, Yun-Guang

    2012-06-01

    Infrared spectroscopy analysis of SF6 and its derivatives is an important method for operating state assessment and fault diagnosis of gas insulated switchgear (GIS). Traditional methods are complicated and inefficient, and the results can vary with different subjects. In the present work, feature extraction methods from machine learning are recommended to solve such diagnosis problems, and a multiscale weighted principal component analysis method is proposed. The proposed method combines the advantages of standard principal component analysis and multiscale decomposition to maximize the feature information in different scales, and modifies the importance of the eigenvectors in classification. The classification performance of the proposed method was demonstrated to be 3 to 4 times better than that of standard PCA for the infrared spectra of SF6 and its derivatives provided by Guangxi Research Institute of Electric Power. PMID:22870634
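
    A minimal sketch of one plausible reading of this scheme (wavelet decomposition, PCA per scale, scale scores weighted by captured variance) is given below; the wavelet, level, weighting rule, and data are assumptions, and the PyWavelets package is required.

    # Sketch: multiscale weighted PCA on simulated spectra (requires PyWavelets).
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 256))           # 100 IR spectra, 256 points

    level = 3
    coeffs = [pywt.wavedec(x, "db4", level=level) for x in X]
    n_bands = level + 1                       # approximation + detail bands

    features = []
    for b in range(n_bands):
        band = np.array([c[b] for c in coeffs])        # per-scale matrix
        pca = PCA(n_components=3).fit(band)
        w = pca.explained_variance_ratio_.sum()        # weight by variance kept
        features.append(w * pca.transform(band))
    features = np.hstack(features)            # weighted multiscale features
    print("feature matrix shape:", features.shape)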

  7. Condition Based Monitoring of Gas Turbine Combustion Components

    SciTech Connect

    Ulerich, Nancy; Kidane, Getnet; Spiegelberg, Christine; Tevs, Nikolai

    2012-09-30

    The objective of this program is to develop sensors that allow condition based monitoring of critical combustion parts of gas turbines. Siemens teamed with innovative, small companies that were developing sensor concepts that could monitor wearing and cracking of hot turbine parts. A magnetic crack monitoring sensor concept developed by JENTEK Sensors, Inc. was evaluated in laboratory tests. Designs for engine application were evaluated. The inability to develop a robust lead wire to transmit the signal long distances resulted in a discontinuation of this concept. An optical wear sensor concept proposed by K Sciences GP, LLC was tested in proof-of-concept testing. The sensor concept depended, however, on optical fiber tips wearing with the loaded part. The fiber tip wear resulted in too much optical input variability; the sensor could not provide adequate stability for measurement. Siemens developed an alternative optical wear sensor approach that used a commercial PHILTEC, Inc. optical gap sensor with an optical spacer to remove fibers from the wearing surface. The gap sensor measured the length of the wearing spacer to follow loaded part wear. This optical wear sensor was developed to a Technology Readiness Level (TRL) of 5. It was validated in lab tests and installed on a floating transition seal in an F-Class gas turbine. Laboratory tests indicate that the concept can measure wear on loaded parts at temperatures up to 800°C with uncertainty of < 0.3 mm. Testing in an F-Class engine installation showed that the optical spacer wore with the wearing part. The electro-optics box located outside the engine enclosure survived the engine enclosure environment. The fiber optic cable and the optical spacer, however, both degraded after about 100 operating hours, impacting the signal analysis.

  8. ANALYSIS OF KEY MPC COMPONENTS MATERIAL REQUIREMENTS (SCPB: N/A)

    SciTech Connect

    D. Stahl

    1996-03-19

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) in response to a request received via a QAP-3-12 Design Input Data Request from Waste Acceptance, Storage & Transportation (WAST) Design (formerly MRS/MPC Design). The request is to provide: Specific material requirements for the various MPC components (shell, basket, closure lids, shield plug, neutron absorber, and flux traps, if used). The objective of this analysis is to provide the requested requirements. The purpose of this analysis is to provide a documented record of the basis for the requested requirements. The response is stated in Section 8 herein. The analysis is based upon requirements from an MGDS perspective.

  9. Controllable-stiffness components based on magnetorheological elastomers

    NASA Astrophysics Data System (ADS)

    Ginder, John M.; Nichols, Mark E.; Elie, Larry D.; Clark, Seamus M.

    2000-06-01

    So-called magnetorheological (MR) elastomers, comprising rubbery polymers loaded with magnetizable particles that are aligned in a magnetic field, possess dynamic stiffness and damping that can subsequently be controlled by applied fields. Tunable automotive bushings and mounts incorporating these materials and an embedded magnetic field source have been constructed. In this article, the response of these components to dynamic mechanical loading is described. They behave essentially as elastomeric springs with stiffness and damping that are increased by tens of percent with an applied electrical current. Their time of response to a change in current is less than ten milliseconds. In addition to serving as a tunable spring or force generator, these components may also serve as deflection sensors.

  10. A Component-based Programming Model for Composite, Distributed Applications

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  11. Component analysis of productivity in home care RNs.

    PubMed

    Benefield, L E

    1996-08-01

    The purpose of this study was to develop a productivity measurement method applicable to home health care registered nurses (RNs) by identifying and quantifying the areas of knowledge and ability that define productive nursing practice in home health care. A descriptive, correlational design using qualitative and quantitative methods of data collection and analysis identified 35 knowledge and ability variables that grouped into seven dimensions: Client/Family Management, Practice Management, Knowledge/Skill Maintenance, Communication, Nursing Process, Written Documentation, and Home Health Care Knowledge. There were no significant differences in productivity variables among four major types of agencies. Among agencies considered preeminent, intellectual skills appeared to be of greater importance to productive practice than direct-care skills. The seven productivity dimensions that emerged from this study show promise in providing 1) a theoretical basis for understanding the knowledge and abilities associated with RN productivity in the home health setting, 2) a description of nurse inputs in a home health services productivity model, and 3) a reality-based measurement tool that has utility in managing RN productivity in home health care. PMID:8828384

  12. Differentially Variable Component Analysis (dVCA): Identifying Multiple Evoked Components using Trial-to-Trial Variability

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.

    2003-01-01

    Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the differentially Variable Component Analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1, in an awake, behaving macaque monkey.
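
    The core idea, estimating per-trial amplitude and latency of an evoked component, can be sketched with a fixed template and cross-correlation; the real dVCA algorithm iteratively re-estimates the component waveforms as well, and the data below are synthetic.

    # Sketch: per-trial latency/amplitude estimation against a known template.
    import numpy as np

    rng = np.random.default_rng(15)
    t = np.arange(500)
    template = np.exp(-0.5 * ((t - 250) / 20.0) ** 2)    # evoked component shape

    trials = []
    for _ in range(50):                                   # variable trials
        amp, lag = rng.uniform(0.5, 2.0), rng.integers(-30, 31)
        trials.append(amp * np.roll(template, lag) + rng.normal(0, 0.3, t.size))

    for trial in trials[:3]:
        xc = np.correlate(trial, template, mode="same")   # align to template
        lag_hat = int(np.argmax(xc)) - 250                # latency estimate
        aligned = np.roll(trial, -lag_hat)
        amp_hat = aligned @ template / (template @ template)  # LS amplitude
        print(f"estimated latency {lag_hat:+3d} samples, amplitude {amp_hat:.2f}")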

  13. What's hampering measurement invariance: detecting non-invariant items using clusterwise simultaneous component analysis

    PubMed Central

    De Roover, Kim; Timmerman, Marieke E.; De Leersnyder, Jozefien; Mesquita, Batja; Ceulemans, Eva

    2014-01-01

    The issue of measurement invariance is ubiquitous in the behavioral sciences nowadays as more and more studies yield multivariate multigroup data. When measurement invariance cannot be established across groups, this is often due to different loadings on only a few items. Within the multigroup CFA framework, methods have been proposed to trace such non-invariant items, but these methods have some disadvantages in that they require researchers to run a multitude of analyses and in that they imply assumptions that are often questionable. In this paper, we propose an alternative strategy which builds on clusterwise simultaneous component analysis (SCA). Clusterwise SCA, being an exploratory technique, assigns the groups under study to a few clusters based on differences and similarities in the component structure of the items, and thus based on the covariance matrices. Non-invariant items can then be traced by comparing the cluster-specific component loadings via congruence coefficients, which is far more parsimonious than comparing the component structure of all separate groups. In this paper we present a heuristic for this procedure. Afterwards, one can return to the multigroup CFA framework and check whether removing the non-invariant items or removing some of the equality restrictions for these items, yields satisfactory invariance test results. An empirical application concerning cross-cultural emotion data is used to demonstrate that this novel approach is useful and can co-exist with the traditional CFA approaches. PMID:24999335
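
    Tracing non-invariant items with congruence coefficients can be sketched as below; the loading matrices are random stand-ins, and the leave-one-item-out screening rule is an illustrative simplification of the heuristic described above.

    # Sketch: Tucker's congruence coefficient between cluster-specific loadings,
    # with leave-one-item-out screening to flag a non-invariant item.
    import numpy as np

    def congruence(x, y):
        """Tucker's phi between two loading vectors."""
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    rng = np.random.default_rng(10)
    load_c1 = rng.normal(size=(12, 2))        # 12 items x 2 components, cluster 1
    load_c2 = load_c1 + rng.normal(0, 0.1, size=(12, 2))   # cluster 2, similar
    load_c2[3] = rng.normal(size=2)           # item 4 behaves differently

    base = min(congruence(load_c1[:, k], load_c2[:, k]) for k in range(2))
    gains = []
    for item in range(12):
        keep = np.arange(12) != item
        gains.append(min(congruence(load_c1[keep, k], load_c2[keep, k])
                         for k in range(2)) - base)
    print("most suspect item:", int(np.argmax(gains)))    # expect item 3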

  14. Partial coverage inspection of corroded engineering components using extreme value analysis

    NASA Astrophysics Data System (ADS)

    Benstock, Daniel; Cegla, Frederic

    2016-02-01

    Ultrasonic thickness C-scans provide information about the wall thickness of a component over the entire inspected area. They are performed to determine the condition of a component. However, this is time-consuming, expensive and can be infeasible where access to a component is restricted. The pressure to maximize inspection resources and minimize inspection costs has led to the development of both new sensing technologies and new inspection strategies. Partial coverage inspection aims to tackle this challenge by using data from an ultrasonic thickness C-scan of a small fraction of a component's area to extrapolate to the condition of the entire component. Extreme value analysis is a particular tool used in partial coverage inspection. Typical implementations of extreme value analysis partition a thickness map into a number of equally sized blocks and extract the minimum thickness from each block. Extreme value theory provides a limiting form for the probability distribution of this set of minimum thicknesses, from which the parameters of the limiting distribution can be estimated. This distribution provides a statistical model for the minimum thickness in a given area, which can be used for extrapolation. In this paper, the basics of extreme value analysis and its assumptions are introduced. We discuss a new method for partitioning a thickness map, based on ensuring that there is evidence that the assumptions of extreme value theory are met by the inspection data. Examples of the implementation of this method are presented on both simulated and experimental data. Further, it is shown that realistic predictions can be made from the statistical models developed using this methodology.
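
    A minimal sketch of the block-minima workflow, assuming a simulated thickness map and an arbitrary block size: partition the map, extract block minima, and fit a generalized extreme value (GEV) distribution by negating the minima (minima of X are maxima of -X).

    # Sketch: extreme value analysis of block minima from a thickness C-scan.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(5)
    thickness = 10.0 - rng.gamma(2.0, 0.1, size=(256, 256))  # mm, corroded plate

    # Partition the map into equally sized blocks and take each block minimum.
    b = 32
    blocks = thickness.reshape(256 // b, b, 256 // b, b).swapaxes(1, 2)
    minima = blocks.min(axis=(2, 3)).ravel()

    shape, loc, scale = genextreme.fit(-minima)      # GEV fit to negated minima
    # Extrapolate: probability that the minimum over one block area is < 9.0 mm.
    p_below = genextreme.sf(-9.0, shape, loc=loc, scale=scale)
    print(f"P(block minimum < 9.0 mm) = {p_below:.3f}")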

  15. Optical methods of stress analysis applied to cracked components

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1991-01-01

    After briefly describing the principles of frozen stress photoelastic and moire interferometric analyses, and the corresponding algorithms for converting optical data from each method into stress intensity factors (SIF), the methods are applied to the determination of crack shapes, SIF determination, crack closure displacement fields, and pre-crack damage mechanisms in typical aircraft component configurations.

  16. Respiratory dose analysis for components of ambient particulate matter

    EPA Science Inventory

    Particulate matter (PM) in the atmosphere is a complex mixture of particles with different sizes and chemical compositions. Although PM is known to induce health effects, specific attributes of PM that may cause health effects are somewhat ambiguous. Dose of each specific compone...

  17. Dynamic substructuring for shock spectrum analysis using component mode synthesis

    NASA Technical Reports Server (NTRS)

    Mcpheeters, Barton W.; Lev, Avivi; Bogert, Philip B.; Scavuzzo, Rudolph J.

    1988-01-01

    Component mode synthesis was used to analyze different types of structures with MSC NASTRAN. The theory and technique of using Multipoint Constraint Equations (MPCs) to connect substructures to each other or to a common foundation is presented. Computation of the dynamic response of the system from shock spectrum inputs was automated using the DMAP programming language of the MSC NASTRAN finite element code.

  18. A Critical Analysis of Football Bowl Subdivision Coaching Contract Components

    ERIC Educational Resources Information Center

    Nichols, Justin Keith

    2012-01-01

    This exploratory study is designed to inventory and analyze contract components used by Football Bowl Subdivision (FBS) institutions in the National Collegiate Athletic Association (NCAA) to further contribute to the body of research. The FBS comprises 120 institutions, and 94 of those institutions submitted contracts to "USA Today"…

  19. A Component Analysis of Schedule Thinning during Functional Communication Training

    ERIC Educational Resources Information Center

    Betz, Alison M.; Fisher, Wayne W.; Roane, Henry S.; Mintz, Joslyn C.; Owen, Todd M.

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can…

  20. Magnetic unmixing of first-order reversal curve diagrams using principal component analysis

    NASA Astrophysics Data System (ADS)

    Lascu, Ioan; Harrison, Richard J.; Li, Yuting; Muraszko, Joy R.; Channell, James E. T.; Piotrowski, Alexander M.; Hodell, David A.

    2015-09-01

    We describe a quantitative magnetic unmixing method based on principal component analysis (PCA) of first-order reversal curve (FORC) diagrams. For PCA, we resample FORC distributions on grids that capture diagnostic signatures of single-domain (SD), pseudosingle-domain (PSD), and multidomain (MD) magnetite, as well as of minerals such as hematite. Individual FORC diagrams are recast as linear combinations of end-member (EM) FORC diagrams, located at user-defined positions in PCA space. The EM selection is guided by constraints derived from physical modeling and imposed by data scatter. We investigate temporal variations of two EMs in bulk North Atlantic sediment cores collected from the Rockall Trough and the Iberian Continental Margin. Sediments from each site contain a mixture of magnetosomes and granulometrically distinct detrital magnetite. We also quantify the spatial variation of three EM components (a coarse silt-sized MD component, a fine silt-sized PSD component, and a mixed clay-sized component containing both SD magnetite and hematite) in surficial sediments along the flow path of the North Atlantic Deep Water (NADW). These samples were separated into granulometric fractions, which helped constrain EM definition. PCA-based unmixing reveals systematic variations in EM relative abundance as a function of distance along NADW flow. Finally, we apply PCA to the combined data set of Rockall Trough and NADW sediments, which can be recast as a four-EM mixture, providing enhanced discrimination between components. Our method forms the foundation of a general solution to the problem of unmixing multicomponent magnetic mixtures, a fundamental task of rock magnetic studies.
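
    The linear unmixing step can be sketched with nonnegative least squares; the end-member matrices below are random stand-ins for real SD/PSD/MD FORC signatures, and the PCA-based end-member selection itself is not reproduced.

    # Sketch: recast a measured FORC distribution as a nonnegative linear
    # combination of end-member (EM) FORC distributions.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(6)
    n_grid = 50 * 50                          # resampled FORC grid, flattened
    EM = rng.random((3, n_grid))              # 3 end-member FORC distributions

    true_abund = np.array([0.6, 0.3, 0.1])
    forc = true_abund @ EM + 0.01 * rng.normal(size=n_grid)  # measured sample

    abund, _ = nnls(EM.T, forc)               # nonnegative mixing coefficients
    abund /= abund.sum()                      # normalize to relative abundances
    print("estimated EM abundances:", np.round(abund, 3))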

  1. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract information on each skin chromophore independently, as density information. The isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003. However, that work did not develop a quantification method for melanin pigmentation. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. As a result, the method worked well for skin images including those showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to the subjective rating values for the appearance of pigmentation. Further analysis is needed to relate the appearance of pigmentation to the size of the pigmented area and its spatial gradation.
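
    A minimal sketch of the pipeline, assuming a pixels-by-RGB optical-density matrix: ICA separates chromophore density images, and a LoG filter flags pigmented blobs. The optical-density conversion, component ordering, and threshold are assumptions, and the image is synthetic.

    # Sketch: ICA chromophore separation followed by LoG blob extraction.
    import numpy as np
    from sklearn.decomposition import FastICA
    from scipy.ndimage import gaussian_laplace

    rng = np.random.default_rng(7)
    H, W = 128, 128
    rgb = rng.uniform(0.2, 1.0, size=(H, W, 3))          # stand-in skin image

    od = -np.log(rgb.reshape(-1, 3))                     # pixel optical densities
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(od)                      # chromophore axes
    melanin = sources[:, 0].reshape(H, W)                # order/sign not fixed

    log_resp = gaussian_laplace(melanin, sigma=3.0)      # blob detection
    pigmented = log_resp < log_resp.mean() - 2 * log_resp.std()
    print("pigmented-area fraction:", pigmented.mean())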

  2. An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Fava, Joseph L.

    1987-01-01

    Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…

  3. Development of an Automated LIBS Analytical Test System Integrated with Component Control and Spectrum Analysis Capabilities

    NASA Astrophysics Data System (ADS)

    Ding, Yu; Tian, Di; Chen, Feipeng; Chen, Pengfei; Qiao, Shujun; Yang, Guang; Li, Chunsheng

    2015-08-01

    The present paper proposes an automated Laser-Induced Breakdown Spectroscopy (LIBS) analytical test system, which consists of a LIBS measurement and control platform based on a modular design concept and LIBS qualitative spectrum analysis software developed in C#. The platform provides flexible interfacing and automated control; it is compatible with component models from different manufacturers and is constructed in modularized form for easy expandability. For peak identification, a more robust method with improved stability has been achieved by applying additional smoothing to the calculated slope before peaks are identified. For element identification, an improved main-lines analysis method, which checks all elements with spectral peaks so that elements without strong spectral lines are not omitted, is applied to the tested LIBS samples; this method also increases identification speed. In this paper, actual applications have been carried out. According to tests, the analytical test system is compatible with components of various models made by different manufacturers. It can automatically control components to acquire experimental data and perform filtering, peak identification, qualitative analysis, etc. on spectral data. Supported by the National Major Scientific Instruments and Equipment Development Special Funds of China (No. 2011YQ030113)
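
    The peak-identification idea can be sketched as follows: smooth the spectrum, differentiate, smooth the slope again, and mark positive-to-negative zero crossings of the slope. The window lengths and the synthetic LIBS spectrum are assumptions.

    # Sketch: peak identification via a smoothed slope and zero crossings.
    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(8)
    wl = np.linspace(200, 800, 3000)                       # wavelength axis (nm)
    spectrum = sum(a * np.exp(-0.5 * ((wl - c) / 0.4) ** 2)
                   for c, a in [(279.5, 5.0), (393.4, 8.0), (656.3, 3.0)])
    spectrum = spectrum + rng.normal(0, 0.05, wl.size)

    smoothed = savgol_filter(spectrum, 21, 3)
    slope = savgol_filter(np.gradient(smoothed), 21, 3)    # smooth the slope too
    cross = (slope[:-1] > 0) & (slope[1:] <= 0)            # + to - zero crossings
    peaks = wl[1:][cross & (smoothed[1:] > 0.5)]           # amplitude gate
    print("detected peak wavelengths (nm):", np.round(peaks, 1))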

  4. Detecting Genomic Signatures of Natural Selection with Principal Component Analysis: Application to the 1000 Genomes Data

    PubMed Central

    Duforet-Frebourg, Nicolas; Luu, Keurcien; Laval, Guillaume; Bazin, Eric; Blum, Michael G.B.

    2016-01-01

    To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans of natural selection using principal component analysis (PCA). We show that the common FST index of genetic differentiation between populations can be viewed as the proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework to detect genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1), comprising 850 individuals from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3×). The correlations between genetic variation and each principal component provide well-known targets for positive selection (EDAR, SLC24A5, SLC45A2, DARC), and also new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and noncoding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt fast open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. PMID:26715629

  5. Detecting Genomic Signatures of Natural Selection with Principal Component Analysis: Application to the 1000 Genomes Data.

    PubMed

    Duforet-Frebourg, Nicolas; Luu, Keurcien; Laval, Guillaume; Bazin, Eric; Blum, Michael G B

    2016-04-01

    To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans of natural selection using principal component analysis (PCA). We show that the common FST index of genetic differentiation between populations can be viewed as the proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework to detect genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1), comprising 850 individuals from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3×). The correlations between genetic variation and each principal component provide well-known targets for positive selection (EDAR, SLC24A5, SLC45A2, DARC), and also new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and noncoding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt fast open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. PMID:26715629
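
    A minimal sketch of the scan statistic, in the spirit of the pcadapt approach described above: squared correlations between each (simulated) variant and each principal component, with the maximum over components used to rank candidate loci. Dimensions are illustrative.

    # Sketch: PCA-based genome scan on simulated genotypes.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(9)
    n_ind, n_snp = 200, 5000
    G = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 genotypes
    G -= G.mean(axis=0)                           # center each variant

    pcs = PCA(n_components=2).fit_transform(G)    # population-structure axes

    # Correlation of every SNP with every PC in one matrix product.
    Gs = G / G.std(axis=0)
    Ps = (pcs - pcs.mean(axis=0)) / pcs.std(axis=0)
    r = Gs.T @ Ps / n_ind                         # (n_snp, 2) correlations
    stat = (r ** 2).max(axis=1)                   # per-SNP test statistic
    print("top candidate SNPs:", np.argsort(stat)[-10:])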

  6. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  7. A preliminary component analysis of a Mach-7 hypersonic vehicle

    NASA Technical Reports Server (NTRS)

    1988-01-01

    As research continues in the development of an aircraft capable of travelling at hypersonic flight velocities, both the propulsion and thermal management systems stand out as areas requiring innovative technological breakthroughs. For propulsion, the difficulty involves efficiently compressing and combusting hydrogen in a supersonic stream, i.e., developing a viable scramjet; for thermal management, the challenge lies in the development of materials and active cooling systems capable of handling the enormous heat fluxes associated with hypersonic flight. This paper focuses on these problems and presents component designs for both an active cooling system and an all-external-compression scramjet. These systems are mated to a Mach-6 passenger cruise aircraft whose aerodynamic configuration was derived from an optimization of NASA wind-tunnel test results. The following outlines the development of the configuration and then focuses on the design of the two component systems.

  8. Gas-component analysis of laser fusion targets.

    PubMed

    Schneggenburger, R G; Updegrove, W S; Nolen, R L

    1978-11-01

    A gas-chromatographic method for analyzing the fuel content of laser fusion targets has been developed. It provides information on isotope ratios in the fuel gas, percent of molecular species, and total pressure of fuel gas in individual targets to a limit of 0.2 ng DT. The method can also be used to quantify other gaseous components not active in the thermonuclear process (e.g., H2, He, etc.). PMID:18698997

  9. Analysis of the hadron component in E.A.S.

    NASA Technical Reports Server (NTRS)

    Procureur, J.; Stamenov, J. N.; Stavrev, P. V.; Ushev, S. Z.

    1985-01-01

    Hadrons in extensive air showers (E.A.S.) provide direct information about high energy interactions. As a rule, the biases pertaining to different shower array arrangements have a relatively large influence on the basic phenomenological characteristics of the E.A.S. hadron component. In this situation, the problem of the correct comparison between model-calculated and experimental characteristics is of great importance for the reliability of the derived conclusions about the high energy interaction characteristics.

  10. Structural analysis of ultra-high speed aircraft structural components

    NASA Technical Reports Server (NTRS)

    Lenzen, K. H.; Siegel, W. H.

    1977-01-01

    The buckling characteristics of a hypersonic beaded skin panel were investigated under pure compression with boundary conditions similar to those found in a wing mounted condition. The primary phases of analysis reported include: (1) experimental testing of the panel to failure; (2) finite element structural analysis of the beaded panel with the computer program NASTRAN; and (3) summary of the semiclassical buckling equations for the beaded panel under purely compressive loads. A comparison of each of the analysis methods is also included.

  11. Analysis of complex elastic structures by a Rayleigh-Ritz component modes method using Lagrange multipliers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Klein, L. R.

    1974-01-01

    The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
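
    The constraint-handling idea can be sketched on a toy problem. Below, two components described by their own (assumed) modes are joined through one continuity equation; for brevity the Lagrange multipliers are eliminated by projecting onto the null space of the constraint matrix, which enforces the same continuity condition as the multiplier formulation.

    # Sketch: Rayleigh-Ritz component-modes coupling with one continuity
    # constraint C q = 0, solved on the admissible (null-space) subspace.
    import numpy as np
    from scipy.linalg import eigh, null_space

    # Generalized coordinates: [2 modes of component A, 2 modes of component B].
    K = np.diag([1.0, 4.0, 2.0, 6.0])        # modal stiffnesses (assumed)
    M = np.eye(4)                            # mass-normalized modal masses

    # Continuity: interface displacement of A equals that of B, expressed
    # through assumed interface mode-shape values.
    phiA, phiB = np.array([1.0, 0.8]), np.array([1.0, -0.5])
    C = np.hstack([phiA, -phiB])[None, :]    # constraint matrix, C q = 0

    Z = null_space(C)                        # admissible subspace
    w2, vecs = eigh(Z.T @ K @ Z, Z.T @ M @ Z)
    print("coupled natural frequencies (rad/s):", np.sqrt(w2))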

  12. Correlation map analysis between appearances of Japanese facial images and amount of melanin and hemoglobin components in the skin

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Uetsuki, Keiji; Ojima, Nobutoshi; Miyake, Yoichi

    2001-06-01

    Skin color reproduction becomes increasingly important with the recent progress in various imaging systems. In this paper, based on subjective experiments, correlation maps are analyzed between the appearance of Japanese facial images and the amounts of the melanin and hemoglobin components in the facial skin. Facial color images were taken by digital still camera. The spatial distributions of melanin and hemoglobin components in the facial color image were separated by independent component analysis of skin colors. The separated components were synthesized to simulate various facial color images by changing the quantities of the two separated pigments. The synthesized images were evaluated subjectively by comparison with the original facial images. From the analysis of the correlation maps, we could identify the visual and psychological terms that are closely related to how the melanin component influences the appearance of facial color images.

  13. Component-Based Approach for Educating Students in Bioinformatics

    ERIC Educational Resources Information Center

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  14. Teacher Perceptions Regarding Portfolio-Based Components of Teacher Evaluations

    ERIC Educational Resources Information Center

    Nagel, Charles I.

    2012-01-01

    This study reports the results of teachers' and principals' perceptions of the package evaluation process, a process that uses a combination of a traditional evaluation with a portfolio-based assessment tool. In addition, this study contributes to the educational knowledge base by exploring the participants' views on the impact of…

  15. Identification of Guideline-Based Components for Innovative Science Curricula.

    ERIC Educational Resources Information Center

    Son, Yeon-A; Pottenger, Francis M., III; Lee, Yang-Rak; Young, Donald B.; Pak, Sung-Jae; Choi, Don-Hyung; Chung, Wan-Ho

    2001-01-01

    Addresses the development of new curricula for science education reform in the hopes of facilitating further development of guideline-based curricula. Examines Korean and U.S. thematic-based and project centers programs. (Contains 39 references.) (Author/YDS)

  16. Component Data Base for Space Station Resistojet Auxiliary Propulsion

    NASA Technical Reports Server (NTRS)

    Bader, Clayton H.

    1988-01-01

    The resistojet was baselined for Space Station auxiliary propulsion because of its operational versatility, efficiency, and durability. This report was conceived as a guide to designers and planners of the Space Station auxiliary propulsion system. It is directed to the low thrust resistojet concept, though it should have application to other station concepts or systems such as the Environmental Control and Life Support System (ECLSS), Manufacturing and Technology Laboratory (MTL), and the Waste Fluid Management System (WFMS). The information will likely be quite useful in the same capacity for other non-Space Station systems including satellite, freeflyers, explorers, and maneuvering vehicles. The report is a catalog of the most useful information for the most significant feed system components and is organized for the greatest convenience of the user.

  17. Automatic Denoising of Functional MRI Data: Combining Independent Component Analysis and Hierarchical Fusion of Classifiers

    PubMed Central

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-01-01

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject “at rest”). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing “signal” (brain activity) can be distinguished from the “noise” components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX (“FMRIB’s ICA-based X-noiseifier”), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the data.
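
    The cleanup step at the end can be sketched in a few lines, assuming the ICA component time series and a hand-made noise labeling: fit all component time series to the data, then subtract only the noise components' fitted contributions (the "non-aggressive" variant), so that variance shared with signal components is preserved. Data and labels below are synthetic.

    # Sketch: regressing labeled noise ICA components out of fMRI-like data.
    import numpy as np

    rng = np.random.default_rng(11)
    T, V, C = 200, 500, 10                  # timepoints, voxels, ICA components
    mix = rng.normal(size=(T, C))           # component time series (from ICA)
    maps = rng.normal(size=(C, V))          # component spatial maps
    data = mix @ maps + 0.1 * rng.normal(size=(T, V))
    noise_idx = [1, 4, 7]                   # components classified as noise

    # Fit ALL component time series to the data, then subtract only the noise
    # components' contributions (preserves variance shared with signal).
    beta = np.linalg.lstsq(mix, data, rcond=None)[0]       # (C, V)
    cleaned = data - mix[:, noise_idx] @ beta[noise_idx]
    print("variance removed:", 1 - cleaned.var() / data.var())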

  18. Auxiliary function approach to independent component analysis and independent vector analysis

    NASA Astrophysics Data System (ADS)

    Ono, N.

    2015-05-01

    In this paper, we review an auxiliary function approach to independent component analysis (ICA) and independent vector analysis (IVA). The derived algorithm consists of two alternating updates: 1) a weighted covariance matrix update and 2) a demixing matrix update, neither of which includes tuning parameters such as a step size in the gradient descent method. The monotonic decrease of the objective function is guaranteed by the principle of the auxiliary function method. The experimental evaluation shows that the derived update rules yield faster convergence and better results than natural gradient updates. An efficient implementation on a mobile phone is also presented.
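
    The two updates can be sketched for a real-valued instantaneous mixture with a Laplacian (super-Gaussian) source model. The mixing setup below is synthetic; the alternation follows the weighted-covariance / demixing-vector updates reviewed above and involves no step-size parameter.

    # Sketch: auxiliary-function ICA updates on a synthetic 2-source mixture.
    import numpy as np

    rng = np.random.default_rng(12)
    n, T = 2, 5000
    S = rng.laplace(size=(n, T))                   # independent sources
    A = rng.normal(size=(n, n))                    # mixing matrix
    X = A @ S                                      # observations
    W = np.eye(n)                                  # demixing matrix estimate

    for it in range(30):
        for k in range(n):
            r = np.abs(W[k] @ X) + 1e-9            # auxiliary variable per frame
            V = (X / r) @ X.T / T                  # 1) weighted covariance update
            w = np.linalg.solve(W @ V, np.eye(n)[k])   # 2) demixing update
            W[k] = w / np.sqrt(w @ V @ w)          #    with normalization

    Y = W @ X
    print("correlation of recovered with true sources:\n",
          np.round(np.corrcoef(np.vstack([Y, S]))[:n, n:], 2))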

  19. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle, or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data was collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.

  20. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.

    PubMed

    Dang, Chinh; Radha, Hayder

    2015-11-01

    Key frame extraction algorithms consider the problem of selecting a subset of the most informative frames from a video to summarize its content. Several applications, such as video summarization, search, indexing, and prints from video, can benefit from extracted key frames of the video under consideration. Most approaches in this class of algorithms work directly with the input video data set, without considering the underlying low-rank structure of the data set. Other algorithms exploit the low-rank component only, ignoring the other key information in the video. In this paper, a novel key frame extraction framework based on robust principal component analysis (RPCA) is proposed. Furthermore, we target the challenging application of extracting key frames from unstructured consumer videos. The proposed framework is motivated by the observation that the RPCA decomposes input data into: 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about each element in the same data set. The two information types are combined into a single l1-norm-based non-convex optimization problem to extract the desired number of key frames. Moreover, we develop a novel iterative algorithm to solve this optimization problem. The proposed RPCA-based framework does not require shot(s) detection, segmentation, or semantic understanding of the underlying video. Finally, experiments are performed on a variety of consumer and other types of videos. A comparison of the results obtained by our method with the ground truth and with related state-of-the-art algorithms clearly illustrates the viability of the proposed RPCA-based framework. PMID:26087486
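
    A simplified stand-in for this framework is sketched below: alternating singular-value and soft thresholding approximates the low-rank/sparse split (the paper's actual formulation is a non-convex l1-based problem with its own solver), and frames are then ranked by the l1 norm of their sparse component. Thresholds and data are illustrative.

    # Sketch: crude RPCA-style low-rank + sparse split, then key frame ranking.
    import numpy as np

    rng = np.random.default_rng(13)
    T, D = 60, 400                             # frames x flattened pixels
    M = np.outer(rng.normal(size=T), rng.normal(size=D))    # static background
    M[[10, 25, 40]] += rng.normal(0, 2.0, size=(3, D))      # distinct events

    L, S = np.zeros_like(M), np.zeros_like(M)
    lam = 1.0 / np.sqrt(max(T, D))             # standard RPCA sparsity weight
    for it in range(50):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - 0.1 * s[0], 0)) @ Vt        # SV shrinkage
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0)  # soft threshold

    k = 3                                      # desired number of key frames
    key = np.argsort(np.abs(S).sum(axis=1))[-k:]            # largest l1 norms
    print("selected key frames:", sorted(key))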