Sample records for component analysis based

  1. Component isolation for multi-component signal analysis using a non-parametric Gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency-analysis-based demodulation with a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) so that the latter can infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that share the same FM behavior. In addition, the results show that the proposed method is superior to methods based on generalised demodulation with singular-value decomposition, on parametric time-frequency analysis with filtering, and on empirical mode decomposition in recovering the amplitude and phase of superimposed components.

  2. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspective of structural equation modeling. These models include: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  3. CLUSFAVOR 5.0: hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles

    PubMed Central

    Peterson, Leif E

    2002-01-01

    CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816

  4. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test higher-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
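
    To make the comparison concrete, here is a minimal numpy sketch of covariance-based parallel analysis as described above; the retention rule, simulation count, and 95% quantile are illustrative choices, and the Tracy-Widom variant is not shown.

      import numpy as np

      def parallel_analysis(X, n_sim=200, quantile=0.95, seed=0):
          """Retain leading components whose sample eigenvalues exceed the
          chosen quantile of eigenvalues from simulated uncorrelated data."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          obs = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
          sims = np.empty((n_sim, p))
          for s in range(n_sim):
              Z = rng.standard_normal((n, p)) * X.std(axis=0, ddof=1)
              sims[s] = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]
          thresh = np.quantile(sims, quantile, axis=0)
          keep = obs > thresh
          return p if keep.all() else int(np.argmin(keep))

      # Toy check: one common factor loading on the first three variables,
      # so roughly one component should be retained.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((500, 10))
      X[:, :3] += 3 * rng.standard_normal((500, 1))
      print(parallel_analysis(X))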

  5. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
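
    As a sketch of the workflow the abstract describes (the array sizes are hypothetical and the data cube here is random), PCA of a spectroscopic image reduces to an SVD of the pixel-by-spectrum matrix:

      import numpy as np

      # Hypothetical data cube: a 64x64 grid of spectra, 256 points each.
      rng = np.random.default_rng(0)
      cube = rng.standard_normal((64, 64, 256))

      X = cube.reshape(-1, 256)            # one spectrum per grid point
      X = X - X.mean(axis=0)               # center across the grid
      U, S, Vt = np.linalg.svd(X, full_matrices=False)
      variance = S**2 / (X.shape[0] - 1)   # components ranked by variance
      maps = (U * S).reshape(64, 64, -1)   # per-component score maps
      print(variance[:5] / variance.sum()) # variance fraction, first 5 modes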

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Simulation studies show that the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  7. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to reproduce the color of spectral image surfaces more faithfully. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space: the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is smaller than that obtained using the algorithm with traditional principal component analysis, and the reconstructed color is more consistent with human vision.

  8. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
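
    A simplified sketch of the reproducibility idea, assuming scikit-learn's FastICA as the ICA engine; the full RAICAR procedure also thresholds and selectively averages matched components, which is omitted here.

      import numpy as np
      from sklearn.decomposition import FastICA

      def reproducibility_scores(X, n_components=5, n_runs=10):
          """Run ICA repeatedly from different seeds, match components of the
          first run to each other run by maximal absolute correlation, and
          score each component by its mean best-match correlation."""
          runs = [FastICA(n_components=n_components, random_state=s,
                          max_iter=1000).fit_transform(X)
                  for s in range(n_runs)]
          ref, k = runs[0], n_components
          scores = np.zeros(k)
          for other in runs[1:]:
              c = np.abs(np.corrcoef(ref.T, other.T)[:k, k:])
              scores += c.max(axis=1)        # best match in this realization
          return scores / (n_runs - 1)       # higher = more reproducible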

  9. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over the other algorithms for its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial features, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, chiefly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. The experimental results suggest that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
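
    A minimal sketch of the combination described above, assuming PyWavelets and scikit-learn; the face data, image size, and choice of a Haar wavelet are placeholders.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      def wavelet_pca_features(images, n_components=20):
          """Keep the low-frequency (approximation) subband of a 2-D DWT as a
          compact descriptor, then apply PCA to the flattened subbands."""
          coarse = []
          for img in images:
              cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
              coarse.append(cA.ravel())
          pca = PCA(n_components=n_components)
          return pca.fit_transform(np.asarray(coarse)), pca

      # Hypothetical usage with a stack of 64x64 grayscale face images:
      faces = np.random.default_rng(0).random((100, 64, 64))
      features, model = wavelet_pca_features(faces)
      print(features.shape)                  # (100, 20)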

  10. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity-Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite-element-based engineering practice.

  11. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method considers only the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric, where each layer can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, with the different uncertainty components represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in grouping uncertainty components, so that the importance of an extended range of uncertainty sources can be investigated: scenario, model, and other combinations of uncertainty components that represent key model system processes (e.g., the groundwater recharge process, the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and for decision-makers formulating policies and strategies.

  12. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
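
    A toy sketch of kurtosis-based separation in the same spirit, using scikit-learn's deflation-mode FastICA; the mixing matrix and mock "sky maps" below are invented for illustration and do not reproduce the paper's simulation.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_pix = 20000
      cmb = rng.standard_normal(n_pix)                 # near-Gaussian
      dust = rng.exponential(1.0, n_pix)               # non-Gaussian
      co = (rng.random(n_pix) < 0.01) * rng.exponential(5.0, n_pix)  # sparse
      mixing = np.array([[1.0, 0.8, 1.2],              # hypothetical channel
                         [1.0, 1.1, 0.2],              # responses at 100/143/
                         [1.0, 2.0, 0.1]])             # 217 GHz
      maps = mixing @ np.vstack([cmb, dust, co])

      # Deflation-mode FastICA with a cubic nonlinearity (kurtosis-based)
      # extracts components one at a time; a highly kurtotic, CO-like
      # component tends to emerge first.
      ica = FastICA(n_components=3, algorithm="deflation", fun="cube",
                    random_state=0)
      sources = ica.fit_transform(maps.T)              # (n_pix, 3)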

  13. Priority of VHS Development Based on Potential Area using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse priorities for VHS development based on regional potential using principal component analysis (PCA) in Bandung, Indonesia. The study used descriptive analysis of secondary data, reduced to principal components with the Minitab statistical software. The results indicate that the areas with the lowest scores are the priority for VHS construction, with majors programmed in accordance with the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas, with the lowest PCA value in region 2, and Padalarang, with the lowest PCA value.

  14. Development and application of a time-history analysis for rotorcraft dynamics based on a component approach

    NASA Technical Reports Server (NTRS)

    Sopher, R.; Hallock, D. W.

    1985-01-01

    A time history analysis for rotorcraft dynamics based on dynamical substructures, and nonstructural mathematical and aerodynamic components is described. The analysis is applied to predict helicopter ground resonance and response to rotor damage. Other applications illustrate the stability and steady vibratory response of stopped and gimballed rotors, representative of new technology. Desirable attributes expected from modern codes are realized, although the analysis does not employ a complete set of techniques identified for advanced software. The analysis is able to handle a comprehensive set of steady state and stability problems with a small library of components.

  15. Simultaneous fingerprint, quantitative analysis and anti-oxidative based screening of components in Rhizoma Smilacis Glabrae using liquid chromatography coupled with charged aerosol and coulometric array detection.

    PubMed

    Yang, Guang; Zhao, Xin; Wen, Jun; Zhou, Tingting; Fan, Guorong

    2017-04-01

    An analytical approach including fingerprint analysis, quantitative analysis and rapid screening of anti-oxidative components was established and successfully applied to the comprehensive quality control of Rhizoma Smilacis Glabrae (RSG), a well-known Traditional Chinese Medicine used as both medicine and food. Thirteen components were tentatively identified based on their retention behavior, UV absorption and MS fragmentation patterns. Chemometric analysis based on coulometric array data was performed to evaluate the similarity and variation between fifteen batches. Eight discriminating components were quantified using single-compound calibration. The unit responses of those components in coulometric array detection were calculated and compared with those of several compounds reported to possess antioxidant activity, and four of them were tentatively identified as the main contributors to the total anti-oxidative activity. The main advantage of the proposed approach is that it realizes simultaneous fingerprint analysis, quantitative analysis and screening of anti-oxidative components, providing comprehensive information for the quality assessment of RSG.

  16. Least-dependent-component analysis based on mutual information

    NASA Astrophysics Data System (ADS)

    Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter

    2004-12-01

    We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k-nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
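
    This is not the authors' MILCA implementation, but the same k-nearest-neighbor MI estimator (Kraskov et al.) is available behind scikit-learn's mutual_info_regression, which suffices for a sketch of check (i), estimating residual pairwise dependencies:

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      def pairwise_mi(components, k=4):
          """Pairwise MI between output components via the k-NN estimator;
          values near zero indicate approximately independent outputs."""
          n = components.shape[1]
          mi = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i != j:
                      mi[i, j] = mutual_info_regression(
                          components[:, [i]], components[:, j],
                          n_neighbors=k, random_state=0)[0]
          return mi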

  17. How do components of evidence-based psychological treatment cluster in practice? A survey and cluster analysis.

    PubMed

    Gifford, Elizabeth V; Tavakoli, Sara; Weingardt, Kenneth R; Finney, John W; Pierson, Heather M; Rosen, Craig S; Hagedorn, Hildi J; Cook, Joan M; Curran, Geoff M

    2012-01-01

    Evidence-based psychological treatments (EBPTs) are clusters of interventions, but it is unclear how providers actually implement these clusters in practice. A disaggregated measure of EBPTs was developed to characterize clinicians' component-level evidence-based practices and to examine relationships among these practices. Survey items captured components of evidence-based treatments based on treatment integrity measures. The Web-based survey was conducted with 75 U.S. Department of Veterans Affairs (VA) substance use disorder (SUD) practitioners and 149 non-VA community-based SUD practitioners. Clinicians' self-designated treatment orientations were positively related to their endorsement of those EBPT components; however, clinicians used components from a variety of EBPTs. Hierarchical cluster analysis indicated that clinicians combined and organized interventions from cognitive-behavioral therapy, the community reinforcement approach, motivational interviewing, structured family and couples therapy, 12-step facilitation, and contingency management into clusters including empathy and support, treatment engagement and activation, abstinence initiation, and recovery maintenance. Understanding how clinicians use EBPT components may lead to improved evidence-based practice dissemination and implementation.

  18. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  19. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
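
    A minimal sketch of the balanced one-factor ANOVA estimation the note describes (confidence intervals omitted); the array layout is an assumption.

      import numpy as np

      def setup_error_components(errors):
          """errors: (patients a) x (fractions n) array of setup errors (mm).
          Balanced one-factor random-effects ANOVA estimates:
              random SD     sigma = sqrt(MS_within)
              systematic SD Sigma = sqrt((MS_between - MS_within) / n)"""
          a, n = errors.shape
          grand = errors.mean()
          pat_means = errors.mean(axis=1)
          ms_between = n * ((pat_means - grand) ** 2).sum() / (a - 1)
          ms_within = ((errors - pat_means[:, None]) ** 2).sum() / (a * (n - 1))
          systematic = np.sqrt(max((ms_between - ms_within) / n, 0.0))
          return grand, systematic, np.sqrt(ms_within)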

  20. Meta-Analysis of Mathematic Basic-Fact Fluency Interventions: A Component Analysis

    ERIC Educational Resources Information Center

    Codding, Robin S.; Burns, Matthew K.; Lukito, Gracia

    2011-01-01

    Mathematics fluency is a critical component of mathematics learning yet few attempts have been made to synthesize this research base. Seventeen single-case design studies with 55 participants were reviewed using meta-analytic procedures. A component analysis of practice elements was conducted and treatment intensity and feasibility were examined.…

  21. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    NASA Astrophysics Data System (ADS)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. The fault structure relations are then organized hierarchically using the interpretive structural model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
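
    As a sketch of the influence-scoring step (the component names and fault-propagation edges are hypothetical, and treating influence as PageRank on the reversed propagation graph is one plausible reading of the method):

      import networkx as nx

      # Edge u -> v: a fault in component u tends to induce a fault in v.
      edges = [("electrical", "hydraulic"), ("electrical", "spindle"),
               ("hydraulic", "spindle"), ("hydraulic", "turret"),
               ("turret", "tool_magazine"), ("spindle", "tool_magazine")]
      G = nx.DiGraph(edges)

      # On the reversed graph, PageRank scores a component highly when many
      # failure chains trace back to it.
      influence = nx.pagerank(G.reverse(), alpha=0.85)
      for comp, score in sorted(influence.items(), key=lambda kv: -kv[1]):
          print(f"{comp:13s} {score:.3f}")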

  22. Independent component analysis for automatic note extraction from musical trills

    NASA Astrophysics Data System (ADS)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  23. Text Analysis: Critical Component of Planning for Text-Based Discussion Focused on Comprehension of Informational Texts

    ERIC Educational Resources Information Center

    Kucan, Linda; Palincsar, Annemarie Sullivan

    2018-01-01

    This investigation focuses on a tool used in a reading methods course to introduce reading specialist candidates to text analysis as a critical component of planning for text-based discussions. Unlike planning that focuses mainly on important text content or information, a text analysis approach focuses both on content and how that content is…

  24. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that, in traditional reliability evaluation of machine center components, the component reliability model exhibits deviations and the evaluation results are low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure-influence-degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for the reliability allocation of machine center systems.

  25. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-moa; Kwak, Dochan (Technical Monitor)

    2002-01-01

    In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are the components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that covariant components are the components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector as a combination of base vectors is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are the decomposed components of the velocity vector along the directions of the coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities: their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.
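
    In standard tensor notation (a compact restatement of the above, not quoted from the paper), a vector V decomposes as

      \mathbf{V} = v^{i}\,\mathbf{e}_{i} = v_{i}\,\mathbf{e}^{i}, \qquad
      v^{i} = \mathbf{V}\cdot\mathbf{e}^{i}, \quad
      v_{i} = \mathbf{V}\cdot\mathbf{e}_{i}, \quad
      \mathbf{e}^{i}\cdot\mathbf{e}_{j} = \delta^{i}_{\;j},

    so a dimensionally meaningful (physical) velocity component along coordinate line i is recovered only after rescaling, e.g. v^{i}\sqrt{g_{ii}} with no summation over i.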

  26. A comparison between plaque-based and vessel-based measurement for plaque component using volumetric intravascular ultrasound radiofrequency data analysis.

    PubMed

    Shin, Eun-Seok; Garcia-Garcia, Hector M; Garg, Scot; Serruys, Patrick W

    2011-04-01

    Although percent plaque components on plaque-based measurement have been used traditionally in previous studies, the impact of vessel-based measurement on percent plaque components has yet to be studied. The purpose of this study was therefore to correlate percent plaque components derived by plaque- and vessel-based measurement using intravascular ultrasound virtual histology (IVUS-VH). The patient cohort comprised 206 patients with de novo coronary artery lesions who were imaged with IVUS-VH. Age ranged from 35 to 88 years, and 124 patients were male. Whole pullback analysis was used to calculate plaque volume, vessel volume, and absolute and percent volumes of fibrous, fibrofatty, necrotic core, and dense calcium. The plaque and vessel volumes were well correlated (r = 0.893, P < 0.001). There was a strong correlation between percent plaque component volumes calculated by plaque volume and those calculated by vessel volume (fibrous: r = 0.927, P < 0.001; fibrofatty: r = 0.972, P < 0.001; necrotic core: r = 0.964, P < 0.001; dense calcium: r = 0.980, P < 0.001). Plaque and vessel volumes correlated well to the overall plaque burden. For percent plaque component volume, plaque-based measurement was also highly correlated with vessel-based measurement. Therefore, the percent plaque component volume calculated by vessel volume could be used instead of the conventional percent plaque component volume calculated by plaque volume.

  27. Generalized Structured Component Analysis with Latent Interactions

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  28. Regularized Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  29. The use of principal component and cluster analysis to differentiate banana peel flours based on their starch and dietary fibre components.

    PubMed

    Ramli, Saifullah; Ismail, Noryati; Alkarkhi, Abbas Fadhl Mubarek; Easa, Azhar Mat

    2010-08-01

    Banana peel flour (BPF) prepared from green or ripe Cavendish and Dream banana fruits was assessed for total starch (TS), digestible starch (DS), resistant starch (RS), total dietary fibre (TDF), soluble dietary fibre (SDF) and insoluble dietary fibre (IDF). Principal component analysis (PCA) identified that a single component was responsible for 93.74% of the total variance in the starch and dietary fibre components that differentiated ripe and green banana flours. Cluster analysis (CA) applied to the same data obtained two statistically significant clusters (green and ripe bananas), indicating different behaviours according to the stage of ripeness based on starch and dietary fibre components. We concluded that the starch and dietary fibre components could be used to discriminate between flours prepared from peels obtained from fruits of different ripeness. The results also suggest the potential of green and ripe BPF as functional ingredients in food.

  30. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, however, this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion-tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error, and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract giving the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
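
    A simplified numpy sketch of the selection idea: rank eigendirections of the total covariance by the Mahalanobis-like separation of the two groups along each direction, rather than by eigenvalue. The paper's exact criterion may differ in detail.

      import numpy as np

      def dpca_components(X1, X2, k=10):
          """Return the k eigendirections that best separate two groups."""
          X = np.vstack([X1, X2])
          _, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
          p1, p2 = X1 @ evecs, X2 @ evecs          # project both groups
          within = 0.5 * (p1.var(axis=0, ddof=1) + p2.var(axis=0, ddof=1))
          sep = (p1.mean(axis=0) - p2.mean(axis=0)) ** 2 / within
          order = np.argsort(sep)[::-1]            # most discriminative first
          return evecs[:, order[:k]], sep[order[:k]]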

  31. An Inquiry-Based Project Focused on the X-Ray Powder Diffraction Analysis of Common Household Solids

    ERIC Educational Resources Information Center

    Hulien, Molly L.; Lekse, Jonathan W.; Rosmus, Kimberly A.; Devlin, Kasey P.; Glenn, Jennifer R.; Wisneski, Stephen D.; Wildfong, Peter; Lake, Charles H.; MacNeil, Joseph H.; Aitken, Jennifer A.

    2015-01-01

    While X-ray powder diffraction (XRPD) is a fundamental analytical technique used by solid-state laboratories across a breadth of disciplines, it is still underrepresented in most undergraduate curricula. In this work, we incorporate XRPD analysis into an inquiry-based project that requires students to identify the crystalline component(s) of…

  32. SaaS Platform for Time Series Data Handling

    NASA Astrophysics Data System (ADS)

    Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail

    2018-02-01

    The paper is devoted to the description of MathBrain, a cloud-based resource that follows the "Software as a Service" model. It is designed to maximize the efficiency of current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, principal component analysis and independent component analysis decompositions, quantitative analysis, and magnetoencephalography inverse-problem solution in a single-dipole model based on multichannel spectral data.

  33. Independent component analysis based digital signal processing in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi

    2018-02-01

    In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. ICA-based channel equalization after both single-mode fiber and few-mode fiber transmission is investigated for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats. Performance comparisons with conventional channel equalization techniques are discussed.

  34. Separation of β-amyloid binding and white matter uptake of 18F-flutemetamol using spectral analysis

    PubMed Central

    Heurling, Kerstin; Buckley, Christopher; Vandenberghe, Rik; Laere, Koen Van; Lubberink, Mark

    2015-01-01

    The kinetic components of the β-amyloid ligand 18F-flutemetamol binding in grey and white matter were investigated through spectral analysis, and a method was developed for the creation of parametric images separating grey and white matter uptake. Tracer uptake in grey matter, white matter and cerebellar cortex was analyzed through spectral analysis in six subjects, with (n=4) or without (n=2) apparent β-amyloid deposition, who had undergone dynamic 18F-flutemetamol scanning with arterial blood sampling. The spectra were divided into three components: slow, intermediate and fast basis function rates. The contribution of each component to the total volume of distribution (VT) was assessed for different tissue types. The slow component dominated in white matter (average 90%), had a higher contribution to grey matter VT in subjects with β-amyloid deposition (average 44%) than without (average 6%), and was absent in cerebellar cortex, attributing the slow component of 18F-flutemetamol uptake in grey matter to β-amyloid binding. Parametric images from voxel-based spectral analysis were created for VT and the slow component, together with images segmented on the slow component contribution, confirming that grey matter and white matter uptake can be discriminated at the voxel level using a threshold for the contribution of the slow component to VT. PMID:26550542

  35. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    ERIC Educational Resources Information Center

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education, and many universities have achieved effective results in evaluating it. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  36. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

    We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In the former work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
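
    The underlying construction can be sketched briefly (the synthetic fringes and phase steps below are invented): subtracting the temporal mean of the frame stack and taking the first two spatial principal components yields quadrature fringe patterns whose ratio gives the wrapped phase.

      import numpy as np

      def pca_demodulate(frames):
          """PCA demodulation of randomly phase-shifted interferograms: the
          first two principal components approximate cosine/sine fringes."""
          M = np.asarray([f.ravel() for f in frames], dtype=float)
          M -= M.mean(axis=0)                  # remove the background term
          _, _, Vt = np.linalg.svd(M, full_matrices=False)
          phase = np.arctan2(Vt[1], Vt[0])     # wrapped, up to sign/offset
          return phase.reshape(frames[0].shape)

      x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
      true_phase = 6 * (x**2 + y**2)
      steps = [0.0, 1.1, 2.3, 3.8]             # unknown, nonuniform shifts
      frames = [1 + 0.8 * np.cos(true_phase + d) for d in steps]
      est = pca_demodulate(frames)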

  37. Integrated Droplet-Based Microextraction with ESI-MS for Removal of Matrix Interference in Single-Cell Analysis.

    PubMed

    Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong

    2016-04-29

    Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.

  38. Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong

    2010-03-01

    The functional networks extracted from fMRI images using independent component analysis have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of the temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal-angle-based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI-based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.

  39. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy.

  40. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is based mainly on fundamentals rather than on market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prepare for and cope with financial crises in advance.
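
    A toy numpy sketch of the combination step; the bank count, the five risk measures, and the sign convention are assumptions for illustration.

      import numpy as np

      # Rows = banks, columns = five standardized systemic-risk measures.
      rng = np.random.default_rng(0)
      scores = rng.standard_normal((16, 5))
      scores += rng.standard_normal((16, 1))   # shared systemic factor

      Z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
      _, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
      w = evecs[:, -1]                         # first-PC loadings as weights
      w = w if w.sum() > 0 else -w             # orient so high = risky
      combined = Z @ w                         # composite risk score
      ranking = np.argsort(combined)[::-1]     # riskiest bank first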

  41. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical and electrical. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines for improving design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.

  42. Exergo-Economic Analysis of an Experimental Aircraft Turboprop Engine Under Low Torque Condition

    NASA Astrophysics Data System (ADS)

    Atilgan, Ramazan; Turan, Onder; Aydin, Hakan

    Exergo-economic analysis is a unique combination of exergy analysis and cost analysis conducted at the component level. In exergo-economic analysis, the cost of each exergy stream is determined: the inlet and outlet exergy streams of each component are associated with a monetary cost. This is essential for detecting cost-ineffective processes and identifying technical options that could improve the cost effectiveness of the overall energy system. In this study, exergo-economic analysis is applied to an aircraft turboprop engine. The analysis is based on experimental values at a low torque condition (240 N·m). The main components of the investigated turboprop engine are the compressor, the combustor, the gas generator turbine, the free power turbine and the exhaust. Cost balance equations have been formed for each component individually, and exergo-economic parameters, including cost rates and unit exergy costs, have been calculated for each component.

  43. Data-Based Locally Directed Evaluation of Vocational Education Programs. Component 5. Analysis of Community Resources Utilization.

    ERIC Educational Resources Information Center

    Florida State Univ., Tallahassee. Program of Vocational Education.

    Part of a system by which local education agency (LEA) personnel may evaluate secondary and postsecondary vocational education programs, this fifth of eight components focuses on an analysis of the utilization of community resources. Utilization of the component is designed to open communication channels among all segments of the community so that…

  44. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    PubMed Central

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them in terms of biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found to be most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also found among the influential components, based on the components they affect in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of the most influential components, which affect one another. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may provide a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
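
    The perturbation protocol can be sketched as follows: this toy Python example clamps each node of a small random signed network to OFF (loss of function) or ON (gain of function) and scores influence as the mean change in the rest of the system. The network, dynamics, and run counts are hypothetical stand-ins for the published large-scale model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8  # toy network size; the published model has hundreds of components

    # Random signed interaction matrix standing in for the curated network.
    W = rng.choice([-1, 0, 1], size=(n, n), p=[0.15, 0.70, 0.15])

    def simulate(clamp=None, steps=50):
        """Synchronous threshold dynamics. clamp=(node, 0|1) models a
        loss-of-function (0) or gain-of-function (1) perturbation."""
        state = rng.integers(0, 2, size=n)
        for _ in range(steps):
            state = (W @ state > 0).astype(int)
            if clamp is not None:
                state[clamp[0]] = clamp[1]
        return state

    runs = 200
    baseline = np.mean([simulate() for _ in range(runs)], axis=0)

    # Influence of a perturbation = mean absolute change in all other nodes.
    for i in range(n):
        for value, label in ((0, "loss-of-function"), (1, "gain-of-function")):
            perturbed = np.mean([simulate((i, value)) for _ in range(runs)], axis=0)
            influence = np.delete(np.abs(perturbed - baseline), i).mean()
            print(f"node {i}  {label:16s} influence = {influence:.3f}")
    ```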

  9. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually concatenated into a one-dimensional array, which raises issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using 4-channel electromyographic (EMG) signals recorded from healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
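
    A minimal Python sketch of the two stages, assuming PyWavelets for the stationary WT; the wavelet choice, dimensions, and synthetic data are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt

    def multiscale_matrix(signal, wavelet="db4", level=3):
        """Stationary WT keeps full-length, shift-invariant coefficients at
        every level; stacking them row-wise gives a 2-D multi-scale matrix."""
        coeffs = pywt.swt(signal, wavelet, level=level)   # [(cA_i, cD_i), ...]
        rows = [c for pair in coeffs for c in pair]
        return np.vstack(rows)                            # (2*level, len(signal))

    def two_directional_2dpca(mats, r_rows=2, r_cols=16):
        """2D^2PCA: project each matrix from both sides, A -> U^T A V,
        rather than vectorising it as in conventional PCA."""
        mats = np.asarray(mats)
        mean = mats.mean(axis=0)
        centered = mats - mean
        G_col = np.mean([m.T @ m for m in centered], axis=0)  # right projection
        G_row = np.mean([m @ m.T for m in centered], axis=0)  # left projection
        V = np.linalg.eigh(G_col)[1][:, ::-1][:, :r_cols]
        U = np.linalg.eigh(G_row)[1][:, ::-1][:, :r_rows]
        return [U.T @ (m - mean) @ V for m in mats]

    # Toy usage with synthetic single-channel "EMG" epochs (hypothetical data).
    rng = np.random.default_rng(1)
    epochs = [multiscale_matrix(rng.standard_normal(256)) for _ in range(20)]
    features = two_directional_2dpca(epochs)
    print(features[0].shape)   # (2, 16) low-dimensional feature matrix
    ```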

  10. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can overcome the limited scalability of centralized processing. To reduce the intermediate data and error components generated when analyzing distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm processes eigenvalues using Householder tridiagonalization and QR factorization, calculating the error component of the heterogeneous database associated with the public key in order to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
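
    A minimal sketch of the eigenvalue machinery named in the abstract, for a single symmetric matrix: Householder reduction to tridiagonal form (SciPy's Hessenberg routine yields a tridiagonal matrix for symmetric input) followed by plain, unshifted QR iteration. The distributed and privacy-related aspects of the algorithm are not reproduced here.

    ```python
    import numpy as np
    from scipy.linalg import hessenberg

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    A = (A + A.T) / 2          # symmetric, covariance-like matrix

    # Householder reduction: for symmetric A the Hessenberg form is tridiagonal.
    T, Q = hessenberg(A, calc_q=True)

    # Plain QR iteration on the tridiagonal matrix; the diagonal converges
    # to the eigenvalues (shifts would accelerate convergence in practice).
    for _ in range(200):
        q, r = np.linalg.qr(T)
        T = r @ q

    print(np.sort(np.diag(T)))
    print(np.sort(np.linalg.eigvalsh(A)))   # reference eigenvalues
    ```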

  11. The Influence Of Component Alignment On The Life Of Total Knee Prostheses

    NASA Astrophysics Data System (ADS)

    Bugariu, Delia; Bereteu, Liviu

    2012-12-01

    An arthritic knee affects the patient's life by causing pain and limiting movement. If the cartilage and the bone surfaces are severely affected, the natural joint is replaced with an artificial joint; the procedure is called total knee arthroplasty (TKA). Lately, the number of implanted total knee prostheses has grown steadily. An important factor in TKA is the perfect alignment of the total knee prosthesis (TKP) components: component misalignment can lead to failure of the prosthesis through the production of wear particles. This paper proposes a study of the mechanical behavior of a TKP based on numerical analysis using ANSYS software. The numerical analysis considers both the normal and an altered angle of component alignment.

  12. Analysis of metabolic syndrome components in >15 000 african americans identifies pleiotropic variants: results from the population architecture using genomics and epidemiology study.

    PubMed

    Carty, Cara L; Bhattacharjee, Samsiddhi; Haessler, Jeff; Cheng, Iona; Hindorff, Lucia A; Aroda, Vanita; Carlson, Christopher S; Hsu, Chun-Nan; Wilkens, Lynne; Liu, Simin; Selvin, Elizabeth; Jackson, Rebecca; North, Kari E; Peters, Ulrike; Pankow, James S; Chatterjee, Nilanjan; Kooperberg, Charles

    2014-08-01

    Metabolic syndrome (MetS) refers to the clustering of cardiometabolic risk factors, including dyslipidemia, central adiposity, hypertension, and hyperglycemia, in individuals. Identification of pleiotropic genetic factors associated with MetS traits may shed light on key pathways or mediators underlying MetS. Using the Metabochip array in 15 148 African Americans from the Population Architecture using Genomics and Epidemiology (PAGE) study, we identify susceptibility loci and investigate pleiotropy among genetic variants using the subset-based meta-analysis method ASsociation analysis based on subSETs (ASSET). Unlike conventional models that lack power when associations for MetS components are null or have opposite effects, ASSET uses 1-sided tests to detect positive and negative associations for components separately and combines the tests, accounting for correlations among components. With ASSET, we identify 27 single nucleotide polymorphisms in 1 glucose and 4 lipid loci (TCF7L2, LPL, APOA5, CETP, and APOC1/APOE/TOMM40) significantly associated with MetS components overall, all P < 2.5 × 10^-7, the Bonferroni-adjusted P value. Three loci replicate in a Hispanic population (n=5172). A novel African American-specific variant, rs12721054/APOC1, and rs10096633/LPL are associated with ≥3 MetS components. We find additional evidence of pleiotropy for APOE, TOMM40, TCF7L2, and CETP variants, many with opposing effects (eg, the same rs7901695/TCF7L2 allele is associated with increased odds of high glucose and decreased odds of central adiposity). We highlight a method to increase power in large-scale genomic association analyses and report a novel variant associated with all MetS components in African Americans. We also identify pleiotropic associations that may be clinically useful in patient risk profiling and for informing translational research of potential gene targets and medications. © 2014 American Heart Association, Inc.
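
    The subset-search idea behind ASSET can be sketched briefly: enumerate trait subsets, compute a fixed-effects meta z-score for each, and keep the maximum. The z-scores below are hypothetical, equal weights are used for brevity, and ASSET additionally searches signed subsets, models trait correlations, and corrects for the multiplicity of the search.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.stats import norm

    # Hypothetical one-sided z-scores of one variant against five MetS components.
    traits = ["glucose", "HDL", "triglycerides", "waist", "blood pressure"]
    z = np.array([4.1, -0.3, 1.9, -2.2, 0.8])

    best_subset, best_z = None, -np.inf
    for k in range(1, len(traits) + 1):
        for subset in combinations(range(len(traits)), k):
            z_meta = z[list(subset)].sum() / np.sqrt(k)  # fixed-effects meta z
            if z_meta > best_z:
                best_subset, best_z = subset, z_meta

    print("best subset:", [traits[i] for i in best_subset])
    # Raw p-value only; ASSET corrects for the multiplicity of the subset search.
    print(f"subset meta z = {best_z:.2f}, raw one-sided p = {norm.sf(best_z):.2e}")
    ```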

  13. The Distressed Brain: A Group Blind Source Separation Analysis on Tinnitus

    PubMed Central

    De Ridder, Dirk; Vanneste, Sven; Congedo, Marco

    2011-01-01

    Background: Tinnitus, the perception of a sound without an external sound source, can lead to variable amounts of distress. Methodology: In a group of tinnitus patients with variable amounts of tinnitus-related distress, as measured by the Tinnitus Questionnaire (TQ), electroencephalography (EEG) is performed to evaluate the patients' resting-state electrical brain activity. This resting-state activity is compared with a control group and between patients with low (N = 30) and high distress (N = 25). The groups are homogeneous for tinnitus type, tinnitus duration, and tinnitus laterality. A group blind source separation (BSS) analysis is performed using a large normative sample (N = 84), generating seven normative components to which the high- and low-distress tinnitus patients are compared. A correlation analysis between the relative power of the obtained normative components and distress is performed. Furthermore, the functional connectivity, as reflected by lagged phase synchronization, is analyzed between the brain areas defined by the components. Finally, a group BSS analysis is performed on the tinnitus group as a whole. Conclusions: Tinnitus can be characterized by at least four BSS components, two of which are posterior cingulate based, one based on the subgenual anterior cingulate, and one based on the parahippocampus. Only the subgenual component correlates with distress. When performed on a normative sample, group BSS reveals that distress is characterized by two anterior cingulate based components. Spectral analysis of these components demonstrates that distress in tinnitus is related to alpha and beta changes in a network consisting of the subgenual anterior cingulate cortex extending to the pregenual and dorsal anterior cingulate cortex, as well as the ventromedial prefrontal cortex/orbitofrontal cortex, insula, and parahippocampus. This network overlaps partially with brain areas implicated in distress in patients suffering from pain, functional somatic syndromes, and posttraumatic stress disorder, and might therefore represent a specific distress network. PMID:21998628

  14. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    To evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchical analysis method is used to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the approximately logarithmic distribution of the index data, a logarithmic centering step is adopted to improve the principal component analysis algorithm. The improved algorithm decorrelates and reduces the dimensions of the evaluation model, and the resulting comprehensive score shows better dispersion. Because the comprehensive scores of the evaluated districts are concentrated, a clustering method is applied to the scores, realizing a stratified evaluation of the districts. An example is given to verify the objectivity and scientific validity of the evaluation method.

  15. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and to predict the minimum number of measurements needed for an accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. To estimate r as accurately as possible, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance-covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was then estimated. Principal component analysis using the phenotypic variance-covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
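
    A minimal sketch of the underlying arithmetic, assuming the classical relations between the repeatability coefficient and the coefficient of determination; the variance components below are hypothetical, not the soursop estimates.

    ```python
    # Repeatability from ANOVA-style variance components (hypothetical values).
    sigma2_g = 4.0          # genotypic variance (between genotypes)
    sigma2_e = 6.0          # residual variance (within genotypes, between years)

    r = sigma2_g / (sigma2_g + sigma2_e)            # repeatability coefficient

    # Coefficient of determination with n repeated measurements:
    #   R = n*r / (1 + (n-1)*r)   =>   n = R*(1-r) / (r*(1-R))
    for R in (0.80, 0.90, 0.95):
        n = R * (1 - r) / (r * (1 - R))
        print(f"R = {R:.2f}  ->  minimum measurements = {n:.1f}")
    ```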

  16. Biochemical component identification by plasmonic improved whispering gallery mode optical resonance based sensor

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-05-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins, microelements, and antibiotics of different generations, in both single- and multi-component solutions over a wide range of concentrations, analyzed through the light-scattering parameters of a whispering gallery mode optical resonance based sensor. Multiplexing over parameters and components was realized using a purpose-built fluidic sensor cell, with dielectric microspheres fixed in an adhesive layer, together with data processing. Biochemical components were identified by the network analysis techniques developed in this work, and the approach is shown to be applicable to both single-agent and multi-component biochemical analysis. A novel technique combining optical resonance on microring structures, plasmon resonance, and identification tools was also developed. To improve the sensitivity of the microring structures, the adhesive-fixed microspheres were pre-treated with a gold nanoparticle solution; an alternative technique used thin gold films deposited on the substrate below the adhesive layer. Both biomolecule and nanoparticle injections caused considerable changes in the optical resonance spectra, and plasmonic gold layers of optimized thickness further improved the resonance spectra. The advantages of plasmon-enhanced optical microcavity resonance combined with multiparameter identification tools are thus used to develop a new platform for an ultra-sensitive, label-free biomedical sensor.

  17. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    ERIC Educational Resources Information Center

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components all of the textbooks content (including texts, exercises, and illustrations) was examined based on…

  18. A Study on Components of Internal Control-Based Administrative System in Secondary Schools

    ERIC Educational Resources Information Center

    Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda

    2015-01-01

    The aim of this study was to study the components of the internal control-based administrative system in secondary schools, and make a Confirmatory Factor Analysis (CFA) to confirm the goodness of fit of empirical data and component model that resulted from the CFA. The study consisted of three steps: 1) studying of principles, ideas, and theories…

  19. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer based on independent component analysis (ICA) is described. A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differences between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples, measured with three and four spectrometers respectively, were used to test the reliability of this method. The results for both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements from one spectrometer can successfully predict samples measured on another, confirming that the spectra are transferred correctly.
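
    One plausible reading of the decomposition-and-correction step, sketched with scikit-learn's FastICA; replacing the slave block's scores with the master's is a simplification of "correcting the coefficients within the independent components", and all data here are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 40, 200

    # Hypothetical aligned spectra: the same samples measured on a master and a
    # slave spectrometer, stacked row-wise into one spectral matrix.
    master = rng.standard_normal((n_samples, n_wavelengths))
    slave = master * 1.05 + 0.02 + 0.01 * rng.standard_normal((n_samples, n_wavelengths))
    X = np.vstack([master, slave])

    ica = FastICA(n_components=10, random_state=0, max_iter=1000)
    S = ica.fit_transform(X)          # scores, one row per spectrum
    components = ica.components_      # shared independent components

    # Correct the slave block's coefficients toward the master's, then
    # reconstruct standardized slave spectra in the master's space.
    S_corrected = S.copy()
    S_corrected[n_samples:] = S[:n_samples]
    X_std = ica.inverse_transform(S_corrected)
    print(np.abs(X_std[n_samples:] - master).mean())   # residual after transfer
    ```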

  20. Maximum flow-based resilience analysis: From component to system

    PubMed Central

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of the analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and a resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant component capacity improves system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
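
    A minimal Python sketch, assuming NetworkX for the max-flow computation and Zobel's triangular-loss form of resilience, R = 1 - XT/(2T*), where X is the fractional performance loss, T the (linear) recovery time, and T* the evaluation horizon. The network, disruption, and time values are hypothetical.

    ```python
    import networkx as nx

    # Toy capacitated network; system performance = maximum flow from s to t.
    G = nx.DiGraph()
    edges = [("s", "a", 6), ("s", "b", 4), ("a", "t", 5), ("b", "t", 5), ("a", "b", 2)]
    for u, v, cap in edges:
        G.add_edge(u, v, capacity=cap)

    baseline, _ = nx.maximum_flow(G, "s", "t")

    # Disrupt one component: halve the capacity of edge (s, a).
    G["s"]["a"]["capacity"] = 3
    degraded, _ = nx.maximum_flow(G, "s", "t")

    # Zobel-style resilience over horizon T*, assuming linear recovery.
    X = (baseline - degraded) / baseline   # fractional performance loss
    T, T_star = 2.0, 10.0                  # recovery time and horizon (days)
    R = 1 - X * T / (2 * T_star)
    print(f"loss fraction X = {X:.2f}, resilience R = {R:.3f}")
    ```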

  1. Selection of independent components based on cortical mapping of electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  2. Stress analysis of 27% scale model of AH-64 main rotor hub

    NASA Technical Reports Server (NTRS)

    Hodges, R. V.

    1985-01-01

    A stress analysis of a 27% scale model of the AH-64 main rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress-checked for future applications.

  3. Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.

    PubMed

    Kiran Kumar, G R; Reddy, M Ramasubba

    2018-06-08

    Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Although this leads to improved performance in low signal-to-noise ratio (SNR) conditions, the additional computational cost makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA). In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing temporal periodicity, and it does not reduce the SSVEP to rigid templates. Data from ten test subjects were used to evaluate the proposed method, and statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy over standard CCA and MEC in low SNR conditions, with detection accuracy better than CCA and on par with MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
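
    πCA can be posed as a generalized eigenvalue problem: find the spatial filter that minimizes the power of x(t+τ) - x(t) relative to the power of x(t), where τ is one stimulation period. The sketch below, with synthetic data and hypothetical sampling parameters, shows one standard formulation.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def pica_filter(X, period):
        """Periodic component analysis: w minimising the power of
        x(t+period) - x(t) relative to the power of x(t).
        X: (channels, samples) EEG block; period: samples per SSVEP cycle."""
        X = X - X.mean(axis=1, keepdims=True)
        D = X[:, period:] - X[:, :-period]       # deviation from periodicity
        A = D @ D.T / D.shape[1]                 # "aperiodicity" covariance
        B = X @ X.T / X.shape[1]                 # signal covariance
        w_all = eigh(A, B)[1]                    # generalized eigvecs, ascending
        return w_all[:, 0]                       # smallest ratio = most periodic

    # Toy usage: 8 channels, 12 Hz SSVEP sampled at 240 Hz (hypothetical data).
    rng = np.random.default_rng(0)
    fs, f = 240, 12
    t = np.arange(2 * fs) / fs
    X = rng.standard_normal((8, t.size))
    X[3] += 2 * np.sin(2 * np.pi * f * t)        # hide a periodic source in channel 3
    w = pica_filter(X, period=fs // f)
    y = w @ X                                     # extracted SSVEP component
    print(np.round(w, 2))
    ```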

  4. Characterization of DOM adsorption of CNTs by using excitation-emission matrix fluorescence spectroscopy and multiway analysis.

    PubMed

    Peng, Mingguo; Li, Huajie; Li, Dongdong; Du, Erdeng; Li, Zhihong

    2017-06-01

    Carbon nanotubes (CNTs) were utilized to adsorb dissolved organic matter (DOM) in micro-polluted water. The characteristics of DOM adsorption on CNTs were investigated based on UV254, TOC, and fluorescence spectrum measurements. Based on PARAFAC (parallel factor) analysis, four fluorescent components were extracted: one protein-like component (C4) and three humic acid-like components (C1, C2, and C3). The adsorption isotherms, kinetics, and thermodynamics of DOM adsorption on CNTs were further investigated. A Freundlich isotherm model fit the adsorption data well, with high correlation coefficients. As a macro- and meso-porous adsorbent, CNTs preferably adsorb humic acid-like substances rather than protein-like substances, and increasing temperature speeds up the adsorption process. A self-organizing map (SOM) analysis further explains the fluorescent properties of the water samples. The results provide new insight into the adsorption behaviour of DOM fluorescent components on CNTs.
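
    A minimal sketch of a PARAFAC decomposition of an excitation-emission matrix (EEM) stack, assuming the TensorLy library; the tensor dimensions and random data are placeholders for real fluorescence measurements.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac

    # Hypothetical EEM tensor: samples x excitation wavelengths x emission wavelengths.
    rng = np.random.default_rng(0)
    eem = tl.tensor(rng.random((12, 30, 40)))

    # Four-component model, matching the abstract (C1-C3 humic-like, C4 protein-like).
    weights, factors = non_negative_parafac(eem, rank=4, n_iter_max=200)
    scores, excitation, emission = factors
    print(scores.shape, excitation.shape, emission.shape)  # (12, 4) (30, 4) (40, 4)
    ```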

  5. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate the robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of red-green-blue (RGB) fluorescent images of the BCC. Robustness to intensity fluctuation is due to the scale-invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here extends independent component analysis (ICA) and is necessary to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue; this similarity also generates weak edges, which pose a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods, such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA, and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image varies by almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.

  6. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    NASA Astrophysics Data System (ADS)

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba

    2015-12-01

    The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.

  7. Exploring the molecular mechanisms of Traditional Chinese Medicine components using gene expression signatures and connectivity map.

    PubMed

    Yoo, Minjae; Shin, Jimin; Kim, Hyunmin; Kim, Jihye; Kang, Jaewoo; Tan, Aik Choon

    2018-04-04

    Traditional Chinese Medicine (TCM) has been practiced for thousands of years in China and other Asian countries for treating various symptoms and diseases. However, the underlying molecular mechanisms of TCM are poorly understood, partly due to its "multi-component, multi-target" nature. To uncover the molecular mechanisms of TCM, we performed comprehensive gene expression analysis using the next generation Connectivity Map (CMap) resource, interrogating gene expression signatures obtained from 102 TCM components. We performed systematic data mining and analysis of the mechanisms of action (MoA) of these TCM components based on the CMap results, and clustered the 102 components into four groups based on their MoAs. We performed gene set enrichment analysis on these components to provide additional support for the inferred molecular mechanisms, and we provide literature evidence validating the MoAs identified through this bioinformatics analysis. Finally, we developed the Traditional Chinese Medicine Drug Repurposing Hub (TCMHub), a connectivity map resource to facilitate the elucidation of TCM MoA for drug repurposing research. TCMHub is freely available at http://tanlab.ucdenver.edu/TCMHub. In summary, the molecular mechanisms of TCM can be uncovered using gene expression signatures and connectivity maps; through this analysis we found that many TCM components possess diverse MoAs, which may explain the applications of TCM in treating various symptoms and diseases. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  9. Analysis of complex elastic structures by a Rayleigh-Ritz component modes method using Lagrange multipliers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Klein, L. R.

    1974-01-01

    The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.

  10. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis aims to identify the subspace of principal components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of gravity and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, those that cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, by applying principal component analysis, identified 7 principal components, or main items, which significantly simplifies further statistical analysis.

  11. On-Line Analysis of Southern FIA Data

    Treesearch

    Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch

    2006-01-01

    The Southern On-Line Estimator (SOLE) is a web-based FIA database analysis tool designed with an emphasis on modularity. The Java-based user interface is simple and intuitive to use and the R-based analysis engine is fast and stable. Each component of the program (data retrieval, statistical analysis and output) can be individually modified to accommodate major...

  12. Rapid Characterization of Components in Bolbostemma paniculatum by UPLC/LTQ-Orbitrap MSn Analysis and Multivariate Statistical Analysis for Herb Discrimination.

    PubMed

    Zeng, Yanling; Lu, Yang; Chen, Zhao; Tan, Jiawei; Bai, Jie; Li, Pengyue; Wang, Zhixin; Du, Shouying

    2018-05-11

    Bolbostemma paniculatum is a traditional Chinese medicine (TCM) with various therapeutic effects. Owing to its complex chemical composition, few investigations have acquired a comprehensive picture of the chemical profile of this herb or explicated the differences between samples collected from different places. In this study, a strategy based on UPLC coupled with LTQ-Orbitrap MSn was established for characterizing the chemical components of B. paniculatum. Through a systematic identification strategy, a total of 60 components in B. paniculatum were rapidly separated in 30 min and identified. Based on the peak intensities of all characterized components, principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then employed to classify 18 batches of B. paniculatum into four groups, which were highly consistent with the four climate types of their places of origin. Five compounds were finally screened out as chemical markers to discriminate the internal quality of B. paniculatum. As the first study to systematically characterize the chemical components of B. paniculatum by UPLC-MSn, these results offer essential data for its pharmacological research, and the current strategy provides a useful reference for future investigations on the discovery of important chemical constituents in TCM, as well as the establishment of quality control and evaluation methods.

  13. Independent component analysis based channel equalization for 6 × 6 MIMO-OFDM transmission over few-mode fiber.

    PubMed

    He, Zhixue; Li, Xiang; Luo, Ming; Hu, Rong; Li, Cai; Qiu, Ying; Fu, Songnian; Yang, Qi; Yu, Shaohua

    2016-05-02

    We propose and experimentally demonstrate two independent component analysis (ICA) based channel equalizers (CEs) for 6 × 6 MIMO-OFDM transmission over few-mode fiber. Compared with the conventional channel equalizer based on training symbols (TSs-CE), the proposed two ICA-based channel equalizers (ICA-CE-I and ICA-CE-II) can achieve comparable performances, while requiring much less training symbols. Consequently, the overheads for channel equalization can be substantially reduced from 13.7% to 0.4% and 2.6%, respectively. Meanwhile, we also experimentally investigate the convergence speed of the proposed ICA-based CEs.

  14. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Mao; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This is an old issue in computational fluid dynamics (CFD): what is the so-called contravariant velocity, or contravariant velocity component? In this article, we review the basics of tensor analysis and give the contravariant velocity component a rigorous explanation. For a given coordinate system, there exist two uniquely determined base vector systems: the covariant and the contravariant base vector systems, which are reciprocal. The so-called contravariant velocity component is really the contravariant component of the velocity vector for a time-independent coordinate system, or the contravariant component of the relative velocity between fluid and coordinates for a time-dependent coordinate system. The contravariant velocity components are not physical quantities of the velocity vector: their magnitudes, dimensions, and associated directions are controlled by their corresponding covariant base vectors. Several two-dimensional (2-D) linear examples and the 2-D mass-conservation equation are used to illustrate the details of expressing a vector with respect to the covariant and contravariant base vector systems.
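
    The reciprocal-basis relations reviewed in the article can be written compactly as follows; this is a standard rendering consistent with the abstract, not a quotation of the paper's own equations.

    ```latex
    % Covariant and contravariant base vectors of curvilinear coordinates \xi^i
    \mathbf{g}_i = \frac{\partial \mathbf{r}}{\partial \xi^i}, \qquad
    \mathbf{g}^i = \nabla \xi^i, \qquad
    \mathbf{g}^i \cdot \mathbf{g}_j = \delta^i_j .

    % Contravariant velocity components for a time-independent grid:
    % the velocity expanded on the covariant basis.
    \mathbf{u} = U^i \, \mathbf{g}_i , \qquad
    U^i = \mathbf{g}^i \cdot \mathbf{u} = \nabla \xi^i \cdot \mathbf{u} .

    % For a time-dependent grid moving with velocity \dot{\mathbf{x}}_g,
    % the same construction applies to the relative velocity:
    U^i = \nabla \xi^i \cdot \bigl( \mathbf{u} - \dot{\mathbf{x}}_g \bigr) .
    ```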

  15. The Evaluation and Research of Multi-Project Programs: Program Component Analysis.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    1977-01-01

    It is difficult to base evaluations on concepts irrelevant to state policy making. Evaluation of a multiproject program requires both time and differentiation of method. Data from the California Early Childhood Program illustrate process variables for program component analysis, and research questions for intraprogram comparison. (CP)

  16. SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.

    PubMed

    Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan

    2017-09-01

    With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming more and more important. As a blind source separation technique, group independent component analysis (GICA) has been widely applied to multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA because of its disadvantages. In this paper, to overcome these issues, and considering that the ability of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of priori information extracted from the group subjects; a multi-objective optimization strategy is then used to implement this method. Finally, the post-processing steps of principal component analysis and anti-reconstruction are used to obtain the group spatial component and the individual temporal components, respectively. The experimental results show that the proposed SCGICAR method outperforms classical methods on both single-subject and multi-subject fMRI data analysis. It not only detects more accurate spatial and temporal components for each subject of the group, but also obtains a better group component in both temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods and can better reflect what the subjects in a group have in common. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion, and pattern characteristics of fire targets is proposed, which achieves a satisfactory detection rate across different fire scenes. In this algorithm: (a) a rule-based generic color model was developed from the analysis of a large quantity of flame pixels; (b) from the traditional GICA (geometrical independent component analysis) model, a cumulative geometrical independent component analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy, and fast response of the algorithm.

  18. Capturing multidimensionality in stroke aphasia: mapping principal behavioural components to neural structures

    PubMed Central

    Butler, Rebecca A.

    2014-01-01

    Stroke aphasia is a multidimensional disorder in which patient profiles reflect variation along multiple behavioural continua. We present a novel approach to separating the principal aspects of chronic aphasic performance and isolating their neural bases. Principal components analysis was used to extract core factors underlying performance of 31 participants with chronic stroke aphasia on a large, detailed battery of behavioural assessments. The rotated principal components analysis revealed three key factors, which we labelled as phonology, semantic and executive/cognition on the basis of the common elements in the tests that loaded most strongly on each component. The phonology factor explained the most variance, followed by the semantic factor and then the executive-cognition factor. The use of principal components analysis rendered participants’ scores on these three factors orthogonal and therefore ideal for use as simultaneous continuous predictors in a voxel-based correlational methodology analysis of high resolution structural scans. Phonological processing ability was uniquely related to left posterior perisylvian regions including Heschl’s gyrus, posterior middle and superior temporal gyri and superior temporal sulcus, as well as the white matter underlying the posterior superior temporal gyrus. The semantic factor was uniquely related to left anterior middle temporal gyrus and the underlying temporal stem. The executive-cognition factor was not correlated selectively with the structural integrity of any particular region, as might be expected in light of the widely-distributed and multi-functional nature of the regions that support executive functions. The identified phonological and semantic areas align well with those highlighted by other methodologies such as functional neuroimaging and neurostimulation. The use of principal components analysis allowed us to characterize the neural bases of participants’ behavioural performance more robustly and selectively than the use of raw assessment scores or diagnostic classifications because principal components analysis extracts statistically unique, orthogonal behavioural components of interest. As such, in addition to improving our understanding of lesion–symptom mapping in stroke aphasia, the same approach could be used to clarify brain–behaviour relationships in other neurological disorders. PMID:25348632

  19. A Profile-Based Framework for Factorial Similarity and the Congruence Coefficient.

    PubMed

    Hartley, Anselma G; Furr, R Michael

    2017-01-01

    We present a novel profile-based framework for understanding factorial similarity in the context of exploratory factor analysis in general, and for understanding the congruence coefficient (a commonly used index of factor similarity) specifically. First, we introduce the profile-based framework, articulating factorial similarity in terms of 3 intuitive components: general saturation similarity, differential saturation similarity, and configural similarity. We then articulate the congruence coefficient in terms of these components, along with 2 additional profile-based components, and we explain how these components resolve ambiguities that can be, and are, found when using the congruence coefficient. Finally, we present secondary analyses revealing that profile-based components of factorial similarity are indeed linked to experts' actual evaluations of factorial similarity. Overall, the profile-based approach we present offers new insights into the ways in which researchers can examine factor similarity and holds the potential to enhance researchers' ability to understand the congruence coefficient.
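
    For reference, Tucker's congruence coefficient for two loading vectors x and y is their normalized inner product. The short sketch below, with invented loading profiles, also illustrates the ambiguity the framework addresses: two factors with quite different general saturation can still yield high congruence.

    ```python
    import numpy as np

    def congruence(x, y):
        """Tucker's congruence coefficient between two factor loading vectors."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        return (x @ y) / np.sqrt((x @ x) * (y @ y))

    # Hypothetical loading profiles: same configuration, different mean loading.
    f1 = np.array([0.70, 0.65, 0.60, 0.10, 0.05])
    f2 = np.array([0.50, 0.45, 0.40, 0.05, 0.02])
    print(round(congruence(f1, f2), 3))   # high despite different saturation
    ```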

  20. Rapidly differentiating grape seeds from different sources based on characteristic fingerprints using direct analysis in real time coupled with time-of-flight mass spectrometry combined with chemometrics.

    PubMed

    Song, Yuqiao; Liao, Jie; Dong, Junxing; Chen, Li

    2015-09-01

    The seeds of grapevine (Vitis vinifera) are a byproduct of wine production. To examine the potential value of grape seeds, grape seeds from seven sources were subjected to fingerprinting using direct analysis in real time coupled with time-of-flight mass spectrometry, combined with chemometrics. First, we listed all 56 reported components of grape seeds and calculated the precise m/z values of the deprotonated ions [M-H]-. Second, the experimental conditions were systematically optimized based on the peak areas of the total ion chromatograms of the samples. Third, the seven grape seed samples were examined using the optimized method, and information on 20 grape seed components was used to represent the characteristic fingerprints. Finally, hierarchical clustering analysis and principal component analysis were performed on the data: the grape seeds from the seven sources were classified into two clusters, with both methods yielding similar results. The results of this study lay the foundation for appropriate utilization and exploitation of grape seed samples. Owing to the absence of complicated sample preparation and chromatographic separation, the method developed here is one of the simplest and least time-consuming approaches to grape seed fingerprinting. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Research on Air Quality Evaluation based on Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan

    2018-01-01

    Economic growth has led to declining environmental capacity and deteriorating air quality. Air quality evaluation, as a foundation of environmental monitoring and air pollution control, has become increasingly important. Based on principal component analysis (PCA), this paper evaluates the air quality of a large city in the Beijing-Tianjin-Hebei area over the past 10 years and identifies the influencing factors, in order to provide a reference for air quality management and air pollution control.

  2. The Potential of Multivariate Analysis in Assessing Students' Attitude to Curriculum Subjects

    ERIC Educational Resources Information Center

    Gaotlhobogwe, Michael; Laugharne, Janet; Durance, Isabelle

    2011-01-01

    Background: Understanding student attitudes to curriculum subjects is central to providing evidence-based options to policy makers in education. Purpose: We illustrate how quantitative approaches used in the social sciences and based on multivariate analysis (categorical Principal Components Analysis, Clustering Analysis and General Linear…

  3. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  4. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  5. Chemical Discrimination of Cortex Phellodendri amurensis and Cortex Phellodendri chinensis by Multivariate Analysis Approach.

    PubMed

    Sun, Hui; Wang, Huiyu; Zhang, Aihua; Yan, Guangli; Han, Ying; Li, Yuan; Wu, Xiuhong; Meng, Xiangcai; Wang, Xijun

    2016-01-01

    As herbal medicines have an important position in health care systems worldwide, their assessment and quality control are currently a major bottleneck. Cortex Phellodendri chinensis (CPC) and Cortex Phellodendri amurensis (CPA) are widely used in China; however, how to distinguish the species of CPA and CPC has become an urgent question. In this study, a multivariate analysis approach was applied to the chemical discrimination of CPA and CPC. Principal component analysis showed that the two herbs could be separated clearly. Chemical markers such as berberine, palmatine, phellodendrine, magnoflorine, obacunone, and obaculactone were identified through orthogonal partial least squares discriminant analysis and were tentatively identified by the accurate masses from quadrupole time-of-flight mass spectrometry. A total of 29 components can be used as chemical markers for the discrimination of CPA and CPC. Of them, phellodendrine is significantly higher in CPC than in CPA, whereas obacunone and obaculactone are significantly higher in CPA than in CPC. The present study proves that a multivariate analysis approach based on chemical analysis greatly contributes to the investigation of CPA and CPC, and shows that the identified chemical markers as a whole should be used to discriminate the two herbal medicines; the results also provide chemical information for their quality assessment. Highlights: a multivariate analysis approach was used to investigate the herbal medicines; the chemical markers were identified through this approach; a total of 29 components can be used as chemical markers; a UPLC-Q/TOF-MS-based multivariate analysis method was developed for the herbal medicine samples. Abbreviations used: CPC: Cortex Phellodendri chinensis, CPA: Cortex Phellodendri amurensis, PCA: Principal component analysis, OPLS-DA: Orthogonal partial least squares discriminant analysis, BPI: Base peak ion intensity.

  6. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  7. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates with each pixel location a multivariate pixel value that has been scaled and quantized into a gray-level vector; the covariance between two component images measures the extent to which they are correlated. The PCT decorrelates the multiimage, reducing its dimensionality and revealing intercomponent dependencies when some off-diagonal covariance elements are not small; for display purposes, the principal-component images must be postprocessed back into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
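
    A minimal sketch of the PCT for an (H, W, B) multiimage: eigendecompose the B x B band covariance and rotate each pixel's gray-level vector. The image here is random placeholder data.

    ```python
    import numpy as np

    def principal_component_transform(multiimage):
        """PCT of an (H, W, B) multiimage: decorrelate the B bands by
        rotating every pixel vector onto the covariance eigenvectors."""
        H, W, B = multiimage.shape
        X = multiimage.reshape(-1, B).astype(float)
        X -= X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        order = np.argsort(evals)[::-1]          # descending variance
        pcs = X @ evecs[:, order]
        return pcs.reshape(H, W, B), evals[order]

    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 4))                # hypothetical 4-band multiimage
    pc_img, variances = principal_component_transform(img)
    print(variances / variances.sum())           # intrinsic dimensionality check
    ```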

  8. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and tools for theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  9. The Layer-Based, Pragmatic Model of the Communication Process.

    ERIC Educational Resources Information Center

    Targowski, Andrew S.; Bowman, Joel P.

    1988-01-01

    Presents the Targowski/Bowman model of the communication process, which introduces a new paradigm that isolates the various components for individual measurement and analysis, places these components into a unified whole, and places communication and its business component into a larger cultural context. (MM)

  10. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  11. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal components analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are utilized to generate discriminant features for the classification. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed with respect to grape variety, year of vintage, and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure for determining the number of extracted features (PCs, ICs) for the optimal DA models is discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of the DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
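
    The PCA/DA versus ICA/DA comparison can be sketched as two scikit-learn pipelines that differ only in the preprocessing step; the data below are random placeholders for binned NMR spectra, and the component counts are arbitrary.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA, FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in for binned NMR spectra with class labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 300))
    y = rng.integers(0, 3, size=120)

    for reducer in (PCA(n_components=15, random_state=0),
                    FastICA(n_components=15, random_state=0, max_iter=1000)):
        clf = make_pipeline(reducer, LinearDiscriminantAnalysis())
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(type(reducer).__name__, f"+ LDA accuracy: {acc:.2f}")
    ```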

  12. New preparation method of β″-alumina and application for AMTEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishi, Toshiro; Tsuru, Yasuhiko; Yamamoto, Hirokazu

    1995-12-31

    The Alkali Metal Thermo-Electric Converter (AMTEC) is an energy conversion system that converts heat to electrical energy with high efficiency. The β″-alumina solid electrolyte (BASE) is the most important component in the AMTEC system. In this paper, the relationship among the conduction properties, the microstructure, and the chemical composition of BASE is studied. Based on an analysis of the chemical reactions of each component, the authors established a new BASE preparation method in place of the conventional one. They also report the performance of an AMTEC cell using this electrolyte tube, on which a Mo or TiC electrode is deposited by screen printing. An electrochemical analysis and a heat-cycle test of the AMTEC cell are also presented.

  13. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    DOE PAGES

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; ...

    2015-12-23

    The advance of the scientific discovery process is accomplished by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC): the first is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.

  14. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    Results of comparing different mother wavelets for de-noising model and experimental data, represented as absorption-spectrum profiles of exhaled air, are presented. The impact of wavelet de-noising on the quality of subsequent classification by principal component analysis is also discussed.
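
    A minimal sketch of the de-noise-then-PCA chain, assuming a matrix X of absorption-spectrum profiles (one per row) and the PyWavelets package; the coiflet mother wavelet, decomposition level, and universal soft threshold are illustrative choices, not necessarily those compared in the paper.

```python
# Wavelet de-noising of each spectral profile before PCA classification.
# The threshold is the standard universal rule estimated from the
# finest-scale detail coefficients.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def denoise(profile, wavelet="coif3", level=4):
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(profile)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(profile)]

X = np.random.rand(40, 1024)                  # placeholder spectra
X_dn = np.array([denoise(row) for row in X])  # de-noise each profile
scores = PCA(n_components=3).fit_transform(X_dn)  # features for classification
```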

  15. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  16. 10 CFR 52.157 - Contents of applications; technical information in final safety analysis report.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... analysis of the structures, systems, and components of the reactor to be manufactured, with emphasis upon... assumed for this evaluation should be based upon a major accident, hypothesized for purposes of site... structures, systems, and components with the objective of assessing the risk to public health and safety...

  17. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, against 94.50% obtained with the time-frequency-based system.
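
    For reference, a plain-NumPy sample entropy (SampEn) estimator of the kind used above to score components; the template length m and tolerance r follow common defaults and are assumptions, not values taken from the study.

```python
# Sample entropy: the negative log ratio of the number of template matches
# of length m+1 to those of length m, within tolerance r (Chebyshev metric).
# Lower values indicate a more regular, predictable signal.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - len(templ)   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

sig = np.sin(np.linspace(0, 30, 800)) + 0.1 * np.random.randn(800)
print(sample_entropy(sig))   # a regular cardiac-like trace scores low
```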

  18. [Applications of three-dimensional fluorescence spectrum of dissolved organic matter to identification of red tide algae].

    PubMed

    Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao

    2011-01-01

    Identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algae species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% at the species level for Pyrrophyta and more than 75% at the genus level for Bacillariophyta, within which the rates for Phaeodactylum and Chaetoceros exceeded 90%. The results showed that the identification techniques for the 10 species of red tide algae, based on the three-dimensional fluorescence spectra of FDOM from cultured red tide algae and principal component analysis, work well.

  19. [New method of mixed gas infrared spectrum analysis based on SVM].

    PubMed

    Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua

    2007-07-01

    A new method of infrared spectrum analysis based on support vector machine (SVM) for mixture gases was proposed. The kernel function in SVM was used to map the seriously overlapping absorption spectra into a high-dimensional space where, after transformation, the data could be processed as in the original space, and a regression calibration model was established. The regression calibration model was then applied to determine the concentration of each component gas. It was also shown that the SVM model could be used for component recognition of the gas mixture. The method was applied to the analysis of different data samples, and factors that affect the model, such as scan interval, wavelength range, kernel function and penalty coefficient C, were discussed. Experimental results show that the maximal mean absolute error of component concentration is 0.132%, and the component recognition accuracy is higher than 94%. The method addresses the problems of overlapping absorption spectra, limited numbers of training samples, and the use of a single method for both qualitative and quantitative analysis; it could be used in other mixture gas infrared spectrum analyses and has promising theoretical and practical value.
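
    A hedged sketch of the SVM idea for overlapping infrared spectra, assuming a matrix of absorbance readings per wavenumber: support vector regression with an RBF kernel calibrates a component's concentration, while an SVM classifier handles component recognition. All shapes, labels, and the penalty value are placeholders.

```python
# One model family for both tasks: SVR calibrates concentration from the
# overlapping spectra, SVC recognises whether a component gas is present.
# The data here are random stand-ins for measured IR absorption spectra.
import numpy as np
from sklearn.svm import SVR, SVC

rng = np.random.RandomState(0)
X = rng.rand(60, 300)                 # absorbance at 300 wavenumbers
c = rng.rand(60)                      # concentration of one component gas
present = (c > 0.5).astype(int)       # toy presence/absence labels

reg = SVR(kernel="rbf", C=10.0).fit(X, c)        # quantitative calibration
clf = SVC(kernel="rbf", C=10.0).fit(X, present)  # qualitative recognition
print(reg.predict(X[:3]), clf.predict(X[:3]))
```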

  20. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.

  1. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
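
    A compact NumPy sketch of the (2D)²PCA step, assuming the model's input is a stack of sliding-window matrices (trading days × technical indicators); the window size, indicator count, and numbers of retained row/column projectors are illustrative only.

```python
# Two-directional 2D PCA: compute column- and row-scatter matrices over the
# stack of data windows, keep the leading eigenvectors of each, and project
# every window from both sides, yielding a small p x q feature matrix.
import numpy as np

def two_dir_2dpca(mats, p=3, q=3):
    A = np.asarray(mats, dtype=float)               # (M, m, n) data windows
    D = A - A.mean(axis=0)
    Gcol = np.einsum("kij,kil->jl", D, D) / len(A)  # n x n column scatter
    Grow = np.einsum("kij,klj->il", D, D) / len(A)  # m x m row scatter
    X = np.linalg.eigh(Gcol)[1][:, ::-1][:, :q]     # top-q right projectors
    Z = np.linalg.eigh(Grow)[1][:, ::-1][:, :p]     # top-p left projectors
    return np.array([Z.T @ a @ X for a in A])       # (M, p, q) features

windows = np.random.rand(200, 20, 36)  # 200 windows, 20 days x 36 indicators
features = two_dir_2dpca(windows).reshape(200, -1)  # flattened RBFNN inputs
```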

  2. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixed into the EEGs of interest. Artifact removal has largely been addressed by existing decomposition methods known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain using Morphological Component Analysis (MCA), which guarantees accurate reconstruction by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with intracranially recorded iEEGs, the signals were successfully decomposed into their original types by a linear expansion over redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphology. PMID:28194221

  3. Simultaneous quantitation of 14 active components in Yinchenhao decoction by using ultra high performance liquid chromatography with diode array detection: Method development and ingredient analysis of different commonly prepared samples.

    PubMed

    Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin

    2016-11-01

    We developed a novel quantitative analysis method based on ultra high performance liquid chromatography coupled with diode array detection for the simultaneous determination of the 14 main active components in Yinchenhao decoction. All components were separated on an Agilent SB-C18 column by using a gradient solvent system of acetonitrile/0.1% phosphoric acid solution at a flow rate of 0.4 mL/min for 35 min. Subsequently, linearity, precision, repeatability, and accuracy tests were implemented to validate the method. Furthermore, the method was applied to a compositional difference analysis of the 14 components in eight normally extracted Yinchenhao decoction samples, accompanied by hierarchical clustering analysis and similarity analysis. All samples fell into three groups with different component contents, demonstrating that the extraction method (decocting, refluxing, or ultrasonication) and the extraction solvent (water or ethanol) affect the composition, which should be taken into account in clinical applications. The results also indicated that a sample prepared by patients at home by water extraction in a casserole was almost the same as one prepared in a stainless-steel kettle, which is mostly used in pharmaceutical factories. This research would help patients to select the best and most convenient method for preparing Yinchenhao decoction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. [Study on ecological suitability regionalization of Eucommia ulmoides in Guizhou].

    PubMed

    Kang, Chuan-Zhi; Wang, Qing-Qing; Zhou, Tao; Jiang, Wei-Ke; Xiao, Cheng-Hong; Xie, Yu

    2014-05-01

    The aim was to study the ecological suitability regionalization of Eucommia ulmoides in order to select artificial planting bases and high-quality industrial raw material purchase areas for the herb in Guizhou. Based on an investigation of 14 Eucommia ulmoides producing areas, pinoresinol diglucoside content and ecological factors were obtained, and a spatial analysis method was used to carry out the ecological suitability regionalization. Combining these data with pinoresinol diglucoside content, the correlations between the major active components and environmental factors were analyzed statistically. The most suitable planting area for Eucommia ulmoides was the northwest of Guizhou. The distribution of Eucommia ulmoides was mainly affected by the type and pH value of the soil and by monthly precipitation. The spatial structure of the major active components in Eucommia ulmoides was randomly distributed in global space, with only one aggregation point showing a high positive correlation in local space. The major active components of Eucommia ulmoides had no correlation with altitude, longitude or latitude. Using spatial and statistical analysis methods based on environmental factors and pinoresinol diglucoside content, the ecological suitability regionalization of Eucommia ulmoides can provide a reference for the selection of suitable planting areas, artificial planting bases and production layout.

  5. Experimental Researches on the Durability Indicators and the Physiological Comfort of Fabrics using the Principal Component Analysis (PCA) Method

    NASA Astrophysics Data System (ADS)

    Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.

    2017-06-01

    The work examined the classification of combed wool fabrics intended for outer clothing in terms of durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for multivariate, multi-dimensional data that aims to reduce, in a controlled way, the number of variables (columns) of the data matrix to as few as two or three. Based on the information about each group/assortment of fabrics, the goal is therefore to replace the nine inter-correlated variables with only two or three new variables, called components. The target of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.

  6. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  7. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  8. Distributed optical fiber vibration sensor based on spectrum analysis of Polarization-OTDR system.

    PubMed

    Zhang, Ziyi; Bao, Xiaoyi

    2008-07-07

    A fully distributed optical fiber vibration sensor is demonstrated based on spectrum analysis of a Polarization-OTDR system. Without any data averaging, detection of vibration disturbances up to 5 kHz is successfully demonstrated in a 1 km fiber link with 10 m spatial resolution. The FFT is performed at each spatial resolution cell; mapping the disturbance at each frequency component against location allows simultaneous detection of multiple events with different or identical frequency components.
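
    A sketch of the per-location spectrum analysis, assuming a matrix of POTDR traces (time along one axis, fiber position along the other); the repetition rate, cell count, and detection threshold are invented for illustration.

```python
# FFT along the time axis at every spatial resolution cell: peaks in a
# cell's spectrum reveal the vibration frequencies acting at that location,
# so multiple simultaneous events can be separated. Synthetic data only.
import numpy as np

fs = 20_000                          # trace repetition rate, Hz (assumed)
t = np.arange(2048) / fs
pos = np.arange(100)                 # 100 cells of 10 m over a 1 km fiber
traces = 0.01 * np.random.randn(len(t), len(pos))
traces[:, 30] += np.sin(2 * np.pi * 500 * t)    # 500 Hz event at cell 30
traces[:, 72] += np.sin(2 * np.pi * 5000 * t)   # 5 kHz event at cell 72

spec = np.abs(np.fft.rfft(traces, axis=0))      # FFT along time, per cell
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
cells = np.where(spec[1:].max(axis=0) > 10)[0]  # skip the DC bin
for c in cells:
    print(f"event at cell {c}: {freqs[1:][spec[1:, c].argmax()]:.0f} Hz")
```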

  9. Case for Deploying Complex Systems Utilizing Commodity Components

    NASA Technical Reports Server (NTRS)

    Bryant, Barry S.; Pitts, R. Lee; Ritter, George

    2003-01-01

    This viewgraph presentation describes the transition of computer networks and software engineering at the Huntsville Operations Support Center (HOSC) from a client/server UNIX based system to a client/server system based on commodity-priced and open system components. Topics covered include: an overview of HOSC ground support systems, an analysis of changes to the existing ground support system, an analysis of options considered for the transition to a new system, and a consideration of goals for a new system.

  10. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151

  11. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
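
    A hedged Monte Carlo stand-in for the probabilistic-response idea (NESSUS itself uses fast probability integration rather than brute-force sampling): random thickness and modulus are propagated through a simple cantilever tip-deflection formula to yield displacement statistics and a rough ranking of which inputs drive the variability. All numbers are illustrative, not SSME values.

```python
# Propagate random inputs through delta = P*L^3 / (3*E*I) and summarize the
# tip-displacement distribution; correlations hint at variable importance.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
E = rng.normal(2.0e11, 1.0e10, n)     # modulus of elasticity, Pa (assumed)
th = rng.normal(0.004, 2e-4, n)       # blade thickness, m (assumed)
L, w, P = 0.05, 0.02, 500.0           # length, width (m), tip load (N)
I = w * th**3 / 12                    # second moment of area
tip = P * L**3 / (3 * E * I)          # tip displacement, m

print(f"mean = {tip.mean():.2e} m, std = {tip.std():.2e} m")
for name, v in (("E", E), ("thickness", th)):
    print(name, "correlation with tip:", np.corrcoef(v, tip)[0, 1].round(2))
```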

  12. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    PubMed

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in a pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.

  13. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    PubMed Central

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-01-01

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in a pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods. PMID:27983577

  14. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line differences. Flight line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines; levelling is therefore applied to the flight line difference data rather than directly to the original AEM data. Pseudo tie lines are selected distributively across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results for survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
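
    A minimal NumPy sketch of the levelling chain under simplifying assumptions: difference adjacent flight lines, sample pseudo tie lines, reconstruct the correlated levelling error from the low-order principal components (via SVD), interpolate it back to all fiducials, and undo the differencing. The grid shape, tie-line spacing, and number of retained components are placeholders.

```python
# PCA levelling on flight-line differences: the low-order components of the
# pseudo-tie-line matrix model the line-to-line correlated levelling error.
import numpy as np

data = np.random.rand(60, 400)          # 60 flight lines x 400 fiducials (toy)
diff = np.diff(data, axis=0)            # flight-line differences, (59, 400)
x_full = np.arange(diff.shape[1])
x_tie = x_full[::20]                    # pseudo tie lines across the lines
ties = diff[:, ::20]                    # (59, 20) tie-line samples

U, s, Vt = np.linalg.svd(ties, full_matrices=False)
k = 2                                   # retained low-order components
err_ties = (U[:, :k] * s[:k]) @ Vt[:k, :]

# interpolate the error estimate to every fiducial, then undo the
# differencing (cumulative sum) to remove it from the original lines
err = np.array([np.interp(x_full, x_tie, row) for row in err_ties])
levelled = data.copy()
levelled[1:] -= np.cumsum(err, axis=0)  # inverse difference of the error
```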

  15. Real-time detection of organic contamination events in water distribution systems by principal components analysis of ultraviolet spectral data.

    PubMed

    Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2017-05-01

    The detection of organic contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. First, the spectrum of each observation is transformed using a discrete wavelet transform with a coiflet mother wavelet to capture abrupt changes along the wavelength axis. Principal component analysis is then employed to approximate the spectra based on the captured and fused features. The value of Hotelling's T² statistic is calculated and tested for significance to detect outliers. An alarm of a contamination event is triggered by sequential Bayesian analysis when outliers appear continuously in several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.
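
    A sketch of the detection chain, assuming a set of clean baseline UV spectra for training; the single-level coiflet DWT follows the paper's wavelet choice, while the PCA dimension and the F-distribution control limit for T² are common defaults used here as assumptions.

```python
# Wavelet features -> PCA scores -> Hotelling's T^2 outlier test. An event
# alarm would be raised only after several consecutive outliers (the
# sequential Bayesian step is omitted here for brevity).
import numpy as np
import pywt
from scipy.stats import f as f_dist
from sklearn.decomposition import PCA

def features(spectrum):
    cA, cD = pywt.dwt(spectrum, "coif1")   # single-level DWT, coiflet mother
    return np.concatenate([cA, cD])

baseline = np.random.rand(200, 256)        # placeholder clean-water spectra
F = np.array([features(s) for s in baseline])
pca = PCA(n_components=4).fit(F)
scores = pca.transform(F)
inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))

def t2(spectrum):
    z = pca.transform(features(spectrum)[None, :])[0]
    return z @ inv_cov @ z                 # Mahalanobis-type statistic

n, k = len(F), pca.n_components_
ucl = k * (n - 1) / (n - k) * f_dist.ppf(0.99, k, n - k)  # 99% control limit
print(t2(baseline[0]), ucl)                # compare statistic to the limit
```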

  16. 14 CFR 35.42 - Components of the propeller control system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Components of the propeller control system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: PROPELLERS Tests and Inspections § 35.42 Components of the propeller control system. The applicant must demonstrate by tests, analysis based on tests, or service...

  17. 14 CFR 35.42 - Components of the propeller control system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Components of the propeller control system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: PROPELLERS Tests and Inspections § 35.42 Components of the propeller control system. The applicant must demonstrate by tests, analysis based on tests, or service...

  18. 14 CFR 35.42 - Components of the propeller control system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Components of the propeller control system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: PROPELLERS Tests and Inspections § 35.42 Components of the propeller control system. The applicant must demonstrate by tests, analysis based on tests, or service...

  19. 14 CFR 35.42 - Components of the propeller control system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Components of the propeller control system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: PROPELLERS Tests and Inspections § 35.42 Components of the propeller control system. The applicant must demonstrate by tests, analysis based on tests, or service...

  20. 14 CFR 35.42 - Components of the propeller control system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Components of the propeller control system... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: PROPELLERS Tests and Inspections § 35.42 Components of the propeller control system. The applicant must demonstrate by tests, analysis based on tests, or service...

  1. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce a ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility, which strengthens the biological interpretation. Computing too few components (far fewer than the MSTD) is not optimal for interpretability of the results. The components ranked within the MSTD range have more chances to be reproduced in independent studies.
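
    A simplified sketch of stability-based ranking in the spirit of the MSTD procedure: run ICA several times with different seeds, match each component of a reference run to its best-correlated counterpart in every other run, and average those correlations into a stability index. Data shapes and run counts are placeholders, and the matching is cruder than the paper's.

```python
# Rank independent components by their reproducibility across ICA runs:
# stable components (high average best-match |correlation|) come first.
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.rand(100, 1000)              # samples x genes (placeholder)
n_comp, runs = 10, 5
S = [FastICA(n_components=n_comp, random_state=r).fit(X).components_
     for r in range(runs)]                 # each: (n_comp, n_genes)

stability = np.zeros(n_comp)
for i in range(n_comp):
    best = []
    for r in range(1, runs):
        corr = np.abs(np.corrcoef(S[0][i], S[r])[0, 1:])  # vs all comps
        best.append(corr.max())            # best match in run r
    stability[i] = np.mean(best)

order = np.argsort(stability)[::-1]        # most reproducible first
print(stability[order])
```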

  2. Intelligent data analysis to interpret major risk factors for diabetic patients with and without ischemic stroke in a small population

    PubMed Central

    Gürgen, Fikret; Gürgen, Nurgül

    2003-01-01

    This study proposes an intelligent data analysis approach to investigate and interpret the distinctive factors of diabetes mellitus patients with and without ischemic (non-embolic type) stroke in a small population. The database consists of a total of 16 features collected from 44 diabetic patients. Features include age, gender, duration of diabetes, cholesterol, high density lipoprotein, triglyceride levels, neuropathy, nephropathy, retinopathy, peripheral vascular disease, myocardial infarction rate, glucose level, medication and blood pressure. Metric and non-metric features are distinguished. First, the mean and covariance of the data are estimated and the correlated components are observed. Second, major components are extracted by principal component analysis. Finally, as common examples of local and global classification approaches, a k-nearest neighbor classifier and a high-degree polynomial classifier, such as a multilayer perceptron, are employed for classification, using both all components and only the major components. Macrovascular changes emerged as the principal distinctive factors of ischemic stroke in diabetes mellitus. Microvascular changes were generally ineffective discriminators. Recommendations were made according to the rules of evidence-based medicine. Briefly, this case study, based on a small population, supports theories of stroke in diabetes mellitus patients and also concludes that the use of intelligent data analysis improves personalized preventive intervention. PMID:12685939

  3. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  4. Advancing individual tree biomass prediction: assessment and alternatives to the component ratio method

    Treesearch

    Aaron Weiskittel; Jereme Frank; David Walker; Phil Radtke; David Macfarlane; James Westfall

    2015-01-01

    Prediction of forest biomass and carbon is becoming an important issue in the United States. However, estimating forest biomass and carbon is difficult and relies on empirically-derived regression equations. Based on recent findings from a national gap analysis and comprehensive assessment of the USDA Forest Service Forest Inventory and Analysis (USFS-FIA) component...

  5. Analysis of Component of Aggression in the Stories of Elementary School Aggressive Children

    ERIC Educational Resources Information Center

    Chamandar, Fateme; Jabbari, D. Susan

    2017-01-01

    The purpose of this study is the content analysis of children's stories based on the components of aggression. Participants are 66 elementary school students (16 girls and 50 boys) from the fourth and fifth grades, selected using the Relational and Overt Aggression Questionnaire completed by their teachers. Draw a Story Test (Silver, 2005) is…

  6. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  7. Respiratory protective device design using control system techniques

    NASA Technical Reports Server (NTRS)

    Burgess, W. A.; Yankovich, D.

    1972-01-01

    The feasibility of a control system analysis approach to provide a design base for respiratory protective devices (RPDs) is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notation describes the operation of the components as closely as possible. The individual component descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical model by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.

  8. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from a need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and would immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is implemented from the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and the selection of the optimal number of features. The experimental results confirm that the features identified by Principal Component Analysis lead to better classification accuracy (96%) than those selected by the genetic algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.

  9. [The application of the multidimensional statistical methods in the evaluation of the influence of atmospheric pollution on the population's health].

    PubMed

    Surzhikov, V D; Surzhikov, D V

    2014-01-01

    The search for and measurement of causal relationships between exposure to air pollution and the health of the population are based on system analysis and risk assessment to improve the quality of research. For this purpose, modern statistical analysis is applied, using tests of independence, principal component analysis and discriminant function analysis. The analysis separated four main components from the set of atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to serve as a measure of the level of air pollution.

  10. Derivative component analysis for mass spectral serum proteomic profiles.

    PubMed

    Han, Henry

    2014-01-01

    As a promising way to transform medicine, mass spectrometry based proteomics technologies have seen great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods that are able to capture essential data behaviors to achieve clinical-level disease diagnosis. Moreover, the field faces a challenge from data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes the identified biomarker patterns to lose repeatability and prevents their real clinical use. In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, derivative component analysis examines input proteomics data in a multi-resolution approach by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker, integrating it with support vector machines to tackle the reproducibility issue, and comparing it with state-of-the-art peers. Our results show that high-dimensional proteomics data are actually linearly separable under the proposed derivative component analysis. As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also suggests an effective resolution to the reproducibility problem of proteomics data and provides new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical-level diagnostic performance reproducible across different proteomic data, which is more robust and systematic than existing biomarker-discovery-based diagnosis. Our findings demonstrate the feasibility and power of the proposed DCA-based profile biomarker diagnosis in achieving high sensitivity and conquering the data reproducibility issue in serum proteomics. Furthermore, our proposed derivative component analysis suggests that subtle data characteristic gleaning and de-noising are essential in separating true signals from red herrings in high-dimensional proteomic profiles, and can be more important than conventional feature selection or dimension reduction. In particular, the profile biomarker diagnosis can be generalized to other omics data, given DCA's nature as a generic data analysis method.
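
    DCA itself is not spelled out in this record, so the sketch below is only a stand-in for the general derivative idea: smoothed first and second derivatives of each profile (via a Savitzky-Golay filter) are stacked as multi-resolution features for an SVM. The window length, labels, and data are all invented.

```python
# Derivative features as implicit de-noising/feature enhancement: the
# smoothed slope and curvature of each spectrum are appended to the raw
# profile before classification. Toy data, not serum proteomics.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(100, 2000)             # proteomic profiles (placeholder)
y = np.random.randint(0, 2, 100)          # disease / control labels (toy)

def derivative_features(profiles):
    d1 = savgol_filter(profiles, 21, 3, deriv=1, axis=1)  # de-noised slope
    d2 = savgol_filter(profiles, 21, 3, deriv=2, axis=1)  # curvature
    return np.hstack([profiles, d1, d2])

acc = cross_val_score(SVC(kernel="linear"), derivative_features(X), y, cv=5)
print(acc.mean())
```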

  11. [Quality evaluation of rhubarb dispensing granules based on multi-component simultaneous quantitative analysis and bioassay].

    PubMed

    Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He

    2017-07-01

    This study attempts to evaluate the quality of Chinese formula granules by the combined use of multi-component simultaneous quantitative analysis and bioassay. The rhubarb dispensing granules were used as the model drug for a demonstrative study. The ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of the 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablets-induced mouse constipation model; the blood-activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and purgative and blood-activating biopotency. The results of the multi-component simultaneous quantitative analysis showed that there was a great difference in chemical characterization and certain differences in purgative biopotency and blood-activating biopotency among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood-activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among different batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.

  12. Space-time latent component modeling of geo-referenced health data.

    PubMed

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  13. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    PubMed

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework underlying the Bobath approach to treatment.

  14. Kmeans-ICA based automatic method for ocular artifacts removal in a motorimagery classification.

    PubMed

    Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad

    2014-01-01

    Electroencephalogram (EEG) recordings are used as inputs to motor imagery based BCI systems. Eye blinks contaminate the frequency spectrum of the EEG signals. Independent Component Analysis (ICA) has already been proven to remove these artifacts, whose frequency bands overlap with the EEG of interest. However, previously developed ICA methods use a reference lead, such as the electrooculogram (EOG), to identify the ocular artifact components. In this study, artifactual components were identified using an adaptive threshold obtained by k-means clustering. The denoised EEG signals were then fed into a feature extraction stage computing the band power, the coherence and the phase locking value, and into a linear discriminant analysis classifier for motor imagery classification.
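
    A sketch of the reference-free pipeline under stated assumptions: unmix the channels with FastICA, cluster simple per-component statistics (kurtosis and peak-to-peak amplitude) with k-means into two groups, treat the cluster containing the spikiest component as ocular, zero those sources, and back-project. These statistics and k=2 stand in for the paper's adaptive threshold.

```python
# K-means over component statistics replaces the EOG reference channel:
# ocular ICs are flagged by their extreme kurtosis/amplitude, then removed.
import numpy as np
from scipy.stats import kurtosis
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA

eeg = np.random.randn(5000, 8)            # samples x channels (placeholder)
ica = FastICA(n_components=8, random_state=0)
S = ica.fit_transform(eeg)                # estimated sources

stats = np.c_[kurtosis(S, axis=0), S.max(axis=0) - S.min(axis=0)]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)
ocular = labels == labels[stats[:, 0].argmax()]  # cluster of the spikiest IC

S_clean = S.copy()
S_clean[:, ocular] = 0.0                    # suppress ocular components
eeg_clean = ica.inverse_transform(S_clean)  # back-project to channel space
```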

  15. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    NASA Astrophysics Data System (ADS)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  16. Constrained independent component analysis approach to nonobtrusive pulse rate measurements.

    PubMed

    Tsouri, Gill R; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
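
    A minimal unconstrained-ICA baseline for the task above, which also exposes the sorting problem the constrained algorithm is designed to remove: after unmixing the mean R, G, B face traces, the pulse component must be picked by hand, here by the strongest spectral peak in a plausible heart-rate band. Frame rate, duration, and the synthetic traces are assumptions.

```python
# Unmix RGB traces with ICA, then "sort" components by spectral peak in the
# 0.7-4 Hz (42-240 bpm) band to find the photoplethysmography signal.
import numpy as np
from sklearn.decomposition import FastICA

fps = 30.0
t = np.arange(900) / fps                 # 30 s of mean face-pixel traces
rgb = np.random.randn(900, 3) * 0.1
rgb += np.sin(2 * np.pi * 1.2 * t)[:, None] * [0.3, 1.0, 0.5]  # 72 bpm PPG

S = FastICA(n_components=3, random_state=0).fit_transform(rgb)
freqs = np.fft.rfftfreq(len(t), d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
power = np.abs(np.fft.rfft(S, axis=0)) ** 2
best = power[band].max(axis=0).argmax()  # the sorting step constrained away
bpm = 60.0 * freqs[band][power[band, best].argmax()]
print(f"estimated pulse: {bpm:.0f} bpm")
```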

  17. Data Base Reexamination as Part of IDS Secondary Analysis.

    ERIC Educational Resources Information Center

    Curry, Blair H.; And Others

    Data reexamination is a critical component for any study. The complexity of the study, the time available for data base development and analysis, and the relationship of the study to educational policy-making can all increase the criticality of such reexamination. Analysis of the error levels in the National Institute of Education's Instructional…

  18. Hydrodynamic design of generic pump components

    NASA Technical Reports Server (NTRS)

    Eastland, A. H. J.; Dodson, H. C.

    1991-01-01

    Inducer and impeller base geometries were defined for a fuel pump for a generic generator cycle. Blade surface data and inlet flowfield definition are available in sufficient detail to allow computational fluid dynamic analysis of the two components.

  19. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.

    PubMed

    Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.

  20. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run

    PubMed Central

    Armeanu, Daniel; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100
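
    A hedged, Stock-Watson-style sketch of the factor-forecast idea: standardize the indicator panel, extract a few principal-component factors, and regress next-quarter GDP growth on the current factors. The GDFM proper uses dynamic (frequency-domain) principal components; static PCA is the simpler stand-in here, and all data are placeholders.

```python
# Factor-based one-step-ahead GDP forecast from a large indicator panel:
# three common components summarize the panel, a linear regression maps
# today's factors to next quarter's growth. Random data, illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

T, N = 80, 86                          # 80 quarters, 86 indicators
panel = np.random.randn(T, N)          # placeholder macro panel
gdp = np.random.randn(T)               # placeholder GDP growth series

Z = StandardScaler().fit_transform(panel)
factors = PCA(n_components=3).fit_transform(Z)   # three common components

model = LinearRegression().fit(factors[:-1], gdp[1:])  # one-step-ahead
forecast = model.predict(factors[[-1]])[0]
print(f"next-quarter GDP growth forecast: {forecast:.2f}")
```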

  1. EFICAz2: enzyme function inference by a combined approach enhanced by machine learning.

    PubMed

    Arakaki, Adrian K; Huang, Ying; Skolnick, Jeffrey

    2009-04-13

    We previously developed EFICAz, an enzyme function inference approach that combines predictions from non-completely overlapping component methods. Two of the four components in the original EFICAz are based on the detection of functionally discriminating residues (FDRs). FDRs distinguish between members of an enzyme family that are homofunctional (classified under the EC number of interest) or heterofunctional (annotated with another EC number or lacking enzymatic activity). Each of the two FDR-based components is associated with one of two specific kinds of enzyme families. EFICAz exhibits high precision performance, except when the maximal test to training sequence identity (MTTSI) is lower than 30%. To improve EFICAz's performance in this regime, we: i) increased the number of predictive components and ii) took advantage of consensus information from the different components to make the final EC number assignment. We have developed two new EFICAz components, analogous to the two FDR-based components, where the discrimination between homo- and heterofunctional members is based on the evaluation, via Support Vector Machine models, of all the aligned positions between the query sequence and the multiple sequence alignments associated with the enzyme families. Benchmark results indicate that: i) the new SVM-based components outperform their FDR-based counterparts, and ii) both SVM-based and FDR-based components generate unique predictions. We developed classification tree models to optimally combine the results from the six EFICAz components into a final EC number prediction. The new implementation of our approach, EFICAz2, exhibits a highly improved prediction precision at MTTSI < 30% compared to the original EFICAz, with only a slight decrease in prediction recall. A comparative analysis of enzyme function annotation of the human proteome by EFICAz2 and KEGG shows that: i) when both sources make EC number assignments for the same protein sequence, the assignments tend to be consistent and ii) EFICAz2 generates considerably more unique assignments than KEGG. Performance benchmarks and the comparison with KEGG demonstrate that EFICAz2 is a powerful and precise tool for enzyme function annotation, with multiple applications in genome analysis and metabolic pathway reconstruction. The EFICAz2 web service is available at: http://cssb.biology.gatech.edu/skolnick/webservice/EFICAz2/index.html.

  2. Bioactive components on immuno-enhancement effects in the traditional Chinese medicine Shenqi Fuzheng Injection based on relevance analysis between chemical HPLC fingerprints and in vivo biological effects.

    PubMed

    Wang, Jinxu; Tong, Xin; Li, Peibo; Liu, Menghua; Peng, Wei; Cao, Hui; Su, Weiwei

    2014-08-08

    Shenqi Fuzheng Injection (SFI) is an injectable traditional Chinese herbal formula composed of two Chinese herbs, Radix codonopsis and Radix astragali, which have been commonly used to improve immune function against chronic diseases in an integrative and holistic way in China and other East Asian countries for thousands of years. The present study was designed to explore the bioactive components underlying the immuno-enhancement effects of SFI using relevance analysis between chemical fingerprints and biological effects in vivo. According to a four-factor, nine-level uniform design, SFI samples were prepared with different proportions of the four portions separated from SFI via high speed counter current chromatography (HSCCC). SFI samples were assessed with high performance liquid chromatography (HPLC) for 23 identified components. For the immunosuppressed murine experiments, biological effects in vivo were evaluated on spleen index (E1), peripheral white blood cell counts (E2), bone marrow cell counts (E3), splenic lymphocyte proliferation (E4), splenic natural killer cell activity (E5), peritoneal macrophage phagocytosis (E6) and the amount of interleukin-2 (E7). Based on the hypothesis that biological effects in vivo varied with differences in components, multivariate relevance analyses, including gray relational analysis (GRA), multi-linear regression analysis (MLRA) and principal component analysis (PCA), were performed to evaluate the contribution of each identified component. The results indicated that the bioactive components responsible for the immuno-enhancement activities of SFI were calycosin-7-O-β-d-glucopyranoside (P9), isomucronulatol-7,2'-di-O-glucoside (P11), biochanin-7-glucoside (P12), 9,10-dimethoxypterocarpan-3-O-xylosylglucoside (P15) and astragaloside IV (P20), which might have positive effects on spleen index (E1), splenic lymphocyte proliferation (E4), splenic natural killer cell activity (E5), peritoneal macrophage phagocytosis (E6) and the amount of interleukin-2 (E7), while 5-hydroxymethyl-furaldehyde (P5) and lobetyolin (P13) might have negative effects on E1, E4, E5, E6 and E7. Finally, a bioactive HPLC fingerprint of SFI based on its immuno-enhancing components was established for quality control of SFI. In summary, this study provided a perspective for exploring the bioactive components in a traditional Chinese herbal formula with a series of HPLC and animal experiments, which would be helpful to improve quality control and inspire further clinical studies of traditional Chinese medicines. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
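
    Gray relational analysis, one of the relevance techniques named above, is simple enough to sketch. The snippet below computes a grey relational grade for each chemical component against one effect index; the data shapes, the normalization and the distinguishing coefficient rho = 0.5 are conventional assumptions rather than the study's exact settings.

```python
# Grey relational grades of component contents versus one in vivo effect index.
import numpy as np

def gra_grades(X, y, rho=0.5):
    norm = lambda v: (v - v.min()) / (v.max() - v.min())   # min-max normalization
    yn = norm(y)
    grades = []
    for j in range(X.shape[1]):
        delta = np.abs(yn - norm(X[:, j]))                 # deviation series
        coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grades.append(coef.mean())                         # relational grade
    return np.array(grades)

rng = np.random.default_rng(1)
contents = rng.random((9, 23))    # 9 uniform-design samples x 23 identified components
effect = rng.random(9)            # e.g., spleen index E1 per sample (placeholder)
print(gra_grades(contents, effect).round(3))
```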

  3. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific indication classification to prioritize their applicability. The results of the extraction process reveal 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  4. Lattice Independent Component Analysis for Mobile Robot Localization

    NASA Astrophysics Data System (ADS)

    Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz

    This paper introduces an approach to appearance-based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and are therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show on a sample path experiment that our approach can recognize the location of the robot, and we compare the results with Independent Component Analysis (ICA).

  5. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    PubMed

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a peak during the weekend, reflecting possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not of methadone or oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of prescription drug use detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
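
    One common way to carry out FPCA in practice is to sample each fitted weekly curve on a common grid and apply ordinary PCA to the sampled curves; the leading components then play the role of FPCs and the projections are the FPC scores. The sketch below follows that reading with synthetic curves; it is not the paper's estimation pipeline.

```python
# FPCA via PCA on discretized weekly curves (synthetic placeholder data).
import numpy as np
from sklearn.decomposition import PCA

t = np.linspace(0, 7, 50)                        # daily grid over one week
curves = np.array([np.sin(2 * np.pi * t / 7 + p) for p in np.linspace(0, 2, 12)])
curves += 0.1 * np.random.default_rng(2).standard_normal(curves.shape)

pca = PCA(n_components=2).fit(curves)
fpcs = pca.components_                           # discretized FPC curves
scores = pca.transform(curves)                   # FPC scores per series
print(scores[:, 1].round(2))                     # e.g., a weekend-vs-midweek contrast
```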

  6. Shape-memory-alloy-based smart knee spacer for total knee arthroplasty: 3D CAD modelling and a computational study.

    PubMed

    Gautam, Arvind; Callejas, Miguel A; Acharyya, Amit; Acharyya, Swati Ghosh

    2018-05-01

    This study introduced a shape memory alloy (SMA)-based smart knee spacer for total knee arthroplasty (TKA). A 3D CAD model of a smart tibial component of TKA was designed in Solidworks software and verified using a finite element analysis in ANSYS Workbench. The two major properties of the SMA (NiTi), pseudoelasticity (PE) and the shape memory effect (SME), were exploited, modelled, and analysed for a TKA application. The effectiveness of the proposed model was verified in ANSYS Workbench through finite element analysis (FEA) of the maximum deformation and equivalent (von Mises) stress distribution. The proposed model was also compared with a polymethylmethacrylate (PMMA)-based spacer for the upper portion of the tibial component for three subjects with body mass indices (BMI) of 23.88, 31.09, and 38.39. The proposed SMA-based smart knee spacer exhibited 96.66978% less deformation (with a standard deviation of 0.01738) than the corresponding PMMA-based counterpart for the same load and flexion angle. Based on the maximum deformation analysis, the PMMA-based spacer had 30 times more permanent deformation than the proposed SMA-based spacer for the same load and flexion angle. The SME property of the lower portion of the tibial component, for fixation of the spacer at its position, was verified by an FEA in ANSYS, in which a strain-life-based fatigue analysis was performed for the PE- and SME-based spacers. Therefore, the SMA-based smart knee spacer eliminates the drawbacks of the PMMA-based spacer, including spacer fracture, loosening, dislocation, tilting or translation, and knee subluxation. Copyright © 2018. Published by Elsevier Ltd.

  7. Effects of cumulative illness severity on hippocampal gray matter volume in major depression: a voxel-based morphometry study.

    PubMed

    Zaremba, Dario; Enneking, Verena; Meinert, Susanne; Förster, Katharina; Bürger, Christian; Dohm, Katharina; Grotegerd, Dominik; Redlich, Ronny; Dietsche, Bruno; Krug, Axel; Kircher, Tilo; Kugel, Harald; Heindel, Walter; Baune, Bernhard T; Arolt, Volker; Dannlowski, Udo

    2018-02-08

    Patients with major depression show reduced hippocampal volume compared to healthy controls. However, the contribution of patients' cumulative illness severity to hippocampal volume has rarely been investigated. It was the aim of our study to find a composite score of cumulative illness severity that is associated with hippocampal volume in depression. We estimated hippocampal gray matter volume using 3-tesla brain magnetic resonance imaging in 213 inpatients with acute major depression according to DSM-IV criteria (employing the SCID interview) and 213 healthy controls. Patients' cumulative illness severity was ascertained by six clinical variables via structured clinical interviews. A principal component analysis was conducted to identify components reflecting cumulative illness severity. Regression analyses and a voxel-based morphometry approach were used to investigate the influence of patients' individual component scores on hippocampal volume. Principal component analysis yielded two main components of cumulative illness severity: Hospitalization and Duration of Illness. While the component Hospitalization incorporated information from the intensity of inpatient treatment, the component Duration of Illness was based on the duration and frequency of illness episodes. We could demonstrate a significant inverse association of patients' Hospitalization component scores with bilateral hippocampal gray matter volume. This relationship was not found for Duration of Illness component scores. Variables associated with patients' history of psychiatric hospitalization seem to be accurate predictors of hippocampal volume in major depression and reliable estimators of patients' cumulative illness severity. Future studies should pay attention to these measures when investigating hippocampal volume changes in major depression.

  8. [Discrimination of Red Tide algae by fluorescence spectra and principal component analysis].

    PubMed

    Su, Rong-guo; Hu, Xu-peng; Zhang, Chuan-song; Wang, Xiu-lin

    2007-07-01

    Fluorescence discrimination technology for 11 species of red tide algae at the genus level was constructed by principal component analysis and non-negative least squares. Rayleigh and Raman scattering peaks of the 3D fluorescence spectra were eliminated by the Delaunay triangulation method. According to the results of Fisher linear discrimination, the first and second principal component scores of the 3D fluorescence spectra were chosen as discriminant features and the feature base was established. The 11 algae species were tested, and more than 85% of the samples were accurately discriminated; for Prorocentrum donghaiense, Skeletonema costatum, and Gymnodinium sp., which have frequently caused red tides in the East China Sea, more than 95% of the samples were correctly discriminated. The results showed that the genus-level discriminant features of the 3D fluorescence spectra of red tide algae given by principal component analysis work well.
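
    The two building blocks named in the abstract, principal component scores as discriminant features and non-negative least squares for unmixing, can be sketched as follows. The reference spectra here are random placeholders and the pipeline is a simplified reading, not the authors' calibrated procedure.

```python
# PCA feature extraction plus NNLS unmixing against reference spectra.
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
refs = rng.random((11, 200))              # 11 genus-level reference spectra (placeholder)
sample = 0.7 * refs[2] + 0.3 * refs[5]    # synthetic mixed sample

abund, _ = nnls(refs.T, sample)           # non-negative abundances of each genus
print(abund.round(2))                     # indices 2 and 5 should dominate

pca = PCA(n_components=2).fit(refs)       # first two PC scores as discriminant features
print(pca.transform(sample[None, :]).round(2))
```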

  9. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
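
    To make the stack-based reliability computation concrete, here is a toy version for an acyclic component probabilistic dependency graph: reliability is accumulated over execution paths as the product of transition probabilities and component reliabilities. The graph, probabilities and the acyclicity assumption are all invented for illustration; the paper's algorithm additionally handles loop entry and exit points.

```python
# Stack-based path enumeration over a toy CPDG (acyclic, for illustration).
graph = {                                  # node -> [(next_node, transition_prob)]
    "A": [("B", 0.6), ("C", 0.4)],
    "B": [("END", 1.0)],
    "C": [("B", 0.5), ("END", 0.5)],
}
rel = {"A": 0.99, "B": 0.97, "C": 0.95}    # per-component reliabilities (assumed)

def system_reliability(start="A"):
    total = 0.0
    stack = [(start, rel[start])]          # (node, reliability of the path so far)
    while stack:
        node, acc = stack.pop()
        for nxt, p in graph.get(node, []):
            if nxt == "END":
                total += acc * p           # complete path: add its contribution
            else:
                stack.append((nxt, acc * p * rel[nxt]))
    return total

print(system_reliability())
```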

  10. Incremental Principal Component Analysis Based Outlier Detection Methods for Spatiotemporal Data Streams

    NASA Astrophysics Data System (ADS)

    Bhushan, A.; Sharker, M. H.; Karimi, H. A.

    2015-07-01

    In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
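
    A simple member of this family of methods, offered here only as a hedged sketch, fits an incremental PCA subspace batch by batch and flags points whose reconstruction error is extreme; the 3-sigma rule and the synthetic stream below are assumptions, not the two methods proposed in the paper.

```python
# IPCA-based outlier flagging on a simulated sensor stream.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(4)
ipca = IncrementalPCA(n_components=3)

for batch_id in range(10):
    batch = rng.standard_normal((50, 8))            # one batch of sensor readings
    batch[0] += 12.0                                # inject a gross outlier
    ipca.partial_fit(batch)                         # update the subspace incrementally
    recon = ipca.inverse_transform(ipca.transform(batch))
    err = np.linalg.norm(batch - recon, axis=1)     # reconstruction error per point
    print(batch_id, np.flatnonzero(err > err.mean() + 3 * err.std()))
```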

  11. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    PubMed

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
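
    One plausible way to cast wavelength selection as a DE problem, sketched under stated assumptions, is to evolve a continuous weight vector, threshold it into a wavelength mask, and score the mask by cross-validated regression error. The objective, the 0.5 threshold and the synthetic data are illustrative choices, not the paper's exact formulation.

```python
# DE-driven wavelength selection with a cross-validation objective.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((40, 30))        # 40 mixtures x 30 THz wavelengths (placeholder)
y = X[:, 5] + 0.5 * X[:, 17] + 0.05 * rng.standard_normal(40)   # concentrations

def objective(w):
    mask = w > 0.5              # threshold the DE vector into a wavelength mask
    if mask.sum() < 2:
        return 1e6              # penalize near-empty selections
    score = cross_val_score(LinearRegression(), X[:, mask], y, cv=5,
                            scoring="neg_mean_squared_error").mean()
    return -score               # DE minimizes, so negate the CV score

result = differential_evolution(objective, bounds=[(0, 1)] * 30,
                                maxiter=10, popsize=10, seed=5, polish=False)
print(np.flatnonzero(result.x > 0.5))
```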

  12. Characterization of Aroma-Active Components and Antioxidant Activity Analysis of E-jiao (Colla Corii Asini) from Different Geographical Origins.

    PubMed

    Zhang, Shan; Xu, Lu; Liu, Yang-Xi; Fu, Hai-Yan; Xiao, Zuo-Bing; She, Yuan-Bin

    2018-04-01

    E-jiao (Colla Corii Asini, CCA) has been widely used as a health food and Chinese medicine. Although authentic CCA is characterized by its typical sweet and neutral fragrance, its aroma components have rarely been investigated. This work investigated the aroma-active components and antioxidant activity of 19 CCAs from different geographical origins. CCA extracts obtained by simultaneous distillation and extraction were analyzed by gas chromatography-mass spectrometry (GC-MS), gas chromatography-olfactometry (GC-O) and sensory analysis. The antioxidant activity of the CCAs was determined by ABTS and DPPH assays. A total of 65 volatile compounds were identified and quantified by GC-MS, and 23 aroma-active compounds were identified by GC-O and aroma extract dilution analysis. The most powerful aroma-active compounds were identified based on the flavor dilution factor, and their contents were compared among the 19 CCAs. Principal component analysis of the 23 aroma-active components showed 3 significant clusters. Canonical correlation analysis between the antioxidant assays and the 23 aroma-active compounds indicated a strong correlation (r = 0.9776, p = 0.0281). Analysis of aroma-active components shows potential for quality evaluation and discrimination of CCAs from different geographical origins.

  13. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.

  14. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters of yield estimation for the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image analysis-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
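
    The berry-detection step can be sketched with OpenCV's circular Hough transform; with the HOUGH_GRADIENT method, the Canny edge detector of the abstract is applied internally (param1 is its upper threshold). The file name and every parameter value below are placeholders that would need tuning on real cluster images.

```python
# Circle detection of berries via the Hough transform (OpenCV sketch).
import cv2
import numpy as np

img = cv2.imread("cluster.jpg")              # hypothetical cluster photograph
if img is None:
    raise SystemExit("image not found")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# HOUGH_GRADIENT runs Canny internally; param1 is the upper Canny threshold.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=150, param2=30, minRadius=5, maxRadius=40)
if circles is not None:
    print("berries detected:", circles.shape[1])   # berry-number estimate
```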

  15. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  16. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  17. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, condition monitoring and fault diagnosis for them are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, methods based on TF analysis have been widely used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator in the short-time Fourier transform. This method can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The estimation of the IF of every component is accomplished by defining a spectrum concentration index (SCI). Moreover, this IF estimation process is iterated until all the components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.

  18. An Integrated Theory for Predicting the Hydrothermomechanical Response of Advanced Composite Structural Components

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Lark, R. F.; Sinclair, J. H.

    1977-01-01

    An integrated theory is developed for predicting the hydrothermomechanical (HDTM) response of fiber composite components. The integrated theory is based on a combined theoretical and experimental investigation. In addition to predicting the HDTM response of components, the theory is structured to assess the combined hydrothermal effects on the mechanical properties of unidirectional composites loaded along the material axis and off-axis, and those of angleplied laminates. The theory developed predicts values which are in good agreement with measured data at the micromechanics, macromechanics, laminate analysis and structural analysis levels.

  19. Designing simulator-based training: an approach integrating cognitive task analysis and four-component instructional design.

    PubMed

    Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G

    2012-01-01

    Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.

  20. Specialized data analysis of SSME and advanced propulsion system vibration measurements

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi

    1993-01-01

    The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.

  1. Concurrent white matter bundles and grey matter networks using independent component analysis.

    PubMed

    O'Muircheartaigh, Jonathan; Jbabdi, Saad

    2018-04-15

    Developments in non-invasive diffusion MRI tractography techniques have permitted the investigation of both the anatomy of white matter pathways connecting grey matter regions and their structural integrity. In parallel, there has been an expansion in automated techniques aimed at parcellating grey matter into distinct regions based on functional imaging. Here we apply independent component analysis to whole-brain tractography data to automatically extract brain networks based on their associated white matter pathways. This method decomposes the tractography data into components that consist of paired grey matter 'nodes' and white matter 'edges', and automatically separates major white matter bundles, including known cortico-cortical and cortico-subcortical tracts. We show how this framework can be used to investigate individual variations in brain networks (in terms of both nodes and edges) as well as their associations with individual differences in behaviour and anatomy. Finally, we investigate correspondences between tractography-based brain components and several canonical resting-state networks derived from functional MRI. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  3. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a definite level of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
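
    The error-bound idea can be sketched by keeping the smallest number of principal components whose reconstruction error stays under a user-set bound. The data, the mean-squared-error criterion and the bound below are placeholder assumptions.

```python
# PCA compression with an explicit reconstruction-error bound.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
readings = rng.standard_normal((200, 12))     # samples x sensors in one cluster

def compress_with_bound(X, bound):
    for k in range(1, X.shape[1] + 1):        # grow the subspace until the bound holds
        pca = PCA(n_components=k).fit(X)
        recon = pca.inverse_transform(pca.transform(X))
        err = np.mean((X - recon) ** 2)       # mean squared reconstruction error
        if err <= bound:
            return k, err
    return X.shape[1], 0.0

print(compress_with_bound(readings, bound=0.5))
```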

  4. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  5. Experimental design based 3-D QSAR analysis of steroid-protein interactions: Application to human CBG complexes

    NASA Astrophysics Data System (ADS)

    Norinder, Ulf

    1990-12-01

    An experimental design based 3-D QSAR analysis using a combination of principal component and PLS analysis is presented and applied to human corticosteroid-binding globulin complexes. The predictive capability of the created model is good. The technique can also be used as guidance when selecting new compounds to be investigated.

  6. An innovative approach for characteristic analysis and state-of-health diagnosis for a Li-ion cell based on the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Cho, B. H.

    2014-08-01

    This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state-of-health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied as a powerful tool in the analysis of the discharging/charging voltage signal (DCVS), with its non-stationary and transient phenomena, for a Li-ion cell. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. By using MRA with implementation of the wavelet decomposition, information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order 3 Daubechies wavelet (db3) and scale 5 selected as the best wavelet function and the optimal decomposition scale. In particular, the present approach develops these investigations one step further by showing low- and high-frequency components (the approximation component An and detail component Dn, respectively) extracted from various Li-ion cells with different electrochemical characteristics caused by aging effects. Experimental results show the effectiveness of the DWT-based approach for reliable diagnosis of the SOH of a Li-ion cell.
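
    The decomposition step is easy to reproduce with PyWavelets, which is assumed here: a db3 wavelet at level 5 splits a synthetic discharge-voltage signal into one approximation (A5) and five detail (D5..D1) components whose energies can then be compared across cells.

```python
# db3 / level-5 multi-resolution decomposition of a synthetic DCVS.
import numpy as np
import pywt

t = np.linspace(0, 1, 2048)
dcvs = np.exp(-t) + 0.05 * np.random.default_rng(7).standard_normal(t.size)

coeffs = pywt.wavedec(dcvs, "db3", level=5)   # [A5, D5, D4, D3, D2, D1]
print("A5 energy:", np.sum(coeffs[0] ** 2))
for i, d in enumerate(coeffs[1:]):
    print(f"D{5 - i} energy:", np.sum(d ** 2))
```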

  7. A Cost-Utility Model of Care for Peristomal Skin Complications

    PubMed Central

    Inglese, Gary; Manson, Andrea; Townshend, Arden

    2016-01-01

    PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166

  8. Ad hoc Laser networks component technology for modular spacecraft

    NASA Astrophysics Data System (ADS)

    Huang, Xiujun; Shi, Dele; Ma, Zongfeng; Shen, Jingshi

    2016-03-01

    Distributed reconfigurable satellites are a new kind of spacecraft system based on a flexible platform of modularization and standardization. Based on an analysis of the module data flow of the spacecraft, this paper proposes a network component with an ad hoc laser network architecture: a low-speed control network is combined with a high-speed load network in a microwave-laser communication mode, without a mesh network mode, to improve the flexibility of the network. The ad hoc laser network component technology was developed, and the related performance testing and experiments were carried out. The results showed that ad hoc laser network components can meet the demands of future networking between the modules of a spacecraft.

  9. Ad hoc laser networks component technology for modular spacecraft

    NASA Astrophysics Data System (ADS)

    Huang, Xiujun; Shi, Dele; Shen, Jingshi

    2017-10-01

    Distributed reconfigurable satellites are a new kind of spacecraft system based on a flexible platform of modularization and standardization. Based on an analysis of the module data flow of the spacecraft, this paper proposes a network component with an ad hoc laser network architecture: a low-speed control network is combined with a high-speed load network in a microwave-laser communication mode, without a mesh network mode, to improve the flexibility of the network. The ad hoc laser network component technology was developed, and the related performance testing and experiments were carried out. The results showed that ad hoc laser network components can meet the demands of future networking between the modules of a spacecraft.

  10. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    NASA Astrophysics Data System (ADS)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. These methods are applied to predicting the volume of motorcycle sales in Indonesia from January 2005 to December 2016 (monthly). Both models are suitable for data with seasonal and trend components. Technically, FSA describes the time domain as the result of trend and seasonal components at different frequencies, which are difficult to identify in time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components. The reconstruction process starts with grouping the decomposition results based on the similarity period of each component in the trajectory matrix. With the optimal window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated based on the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that over the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, SSA, which has the better performance in terms of accuracy, should be used to predict the volume of motorcycle sales in the next period.
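
    The three accuracy measures used for the comparison are standard; written out explicitly (with placeholder arrays) they are:

```python
# MAPE, MAE and RMSE as used to compare the FSA and SSA forecasts.
import numpy as np

def mape(actual, predicted):
    return 100 * np.mean(np.abs((actual - predicted) / actual))

def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

actual = np.array([520_000.0, 480_000.0, 510_000.0])      # placeholder sales volumes
predicted = np.array([500_000.0, 470_000.0, 530_000.0])   # placeholder forecasts
print(mape(actual, predicted), mae(actual, predicted), rmse(actual, predicted))
```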

  11. The Construction of Job Families Based on Company Specific PAQ Job Dimensions.

    ERIC Educational Resources Information Center

    Taylor, L. R.; Colbert, G. A.

    1978-01-01

    Research is presented on the construction of job families based on Position Analysis Questionnaire data. The data were subjected to a component analysis. Results were interpreted as sufficiently encouraging to proceed with analyses of validity generalization within the job families. (Editor/RK)

  12. Analysis and improvement measures of flight delay in China

    NASA Astrophysics Data System (ADS)

    Zang, Yuhang

    2017-03-01

    Firstly, this paper establishes a principal component regression model to analyze the data quantitatively, using principal component analysis to obtain three principal component factors of flight delays. The least squares method is then used to analyze the factors, and the regression equation is obtained by substitution; it is found that the main cause of flight delays is the airlines, followed by weather and traffic. Aiming at these problems, this paper focuses on improving the controllable aspect, traffic flow control. For traffic flow control, an adaptive genetic queuing model is established for the runway terminal area. An optimization method in which fifteen planes land simultaneously on three runways, based on Beijing Capital International Airport, is established; comparing the results with the existing FCFS algorithm proves the superiority of the model.
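
    Principal component regression of the kind described is a two-step pipeline: PCA compresses the correlated delay factors, then least squares regresses the delay measure on the retained components. The sketch below uses synthetic placeholder data.

```python
# Principal component regression: standardize -> PCA -> least squares.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
X = rng.random((60, 9))       # candidate delay factors (airline, weather, traffic, ...)
y = X @ rng.random(9) + 0.1 * rng.standard_normal(60)   # observed delay rate

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print(round(pcr.score(X, y), 3))   # in-sample R^2 of the three-factor regression
```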

  13. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting the system reliability. A weakest t-norm based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  14. A guide to understanding meta-analysis.

    PubMed

    Israel, Heidi; Richter, Randy R

    2011-07-01

    With the focus on evidence-based practice in healthcare, a well-conducted systematic review that includes a meta-analysis where indicated represents a high level of evidence for treatment effectiveness. The purpose of this commentary is to assist clinicians in understanding meta-analysis as a statistical tool, using both published articles and explanations of components of the technique. We describe what meta-analysis is; what heterogeneity is and how it affects meta-analysis; effect size; the modeling techniques of meta-analysis; and the strengths and weaknesses of meta-analysis. Common components such as forest plot interpretation and the software that may be used, special cases for meta-analysis such as subgroup analysis, individual patient data, and meta-regression, and a discussion of criticisms are included.

  15. Principal Component Analysis Based Measure of Structural Holes

    NASA Astrophysics Data System (ADS)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.
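
    One loose reading of such a PCA-based measure, offered purely as an illustrative assumption rather than the paper's definition, treats each node's adjacency row as an observation and asks how few principal components already capture most of the variance; the fewer needed, the more "compressible" the network.

```python
# A PCA-based compressibility proxy for a random undirected network.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
A = (rng.random((100, 100)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric adjacency matrix

explained = np.cumsum(PCA().fit(A).explained_variance_ratio_)
k = int(np.searchsorted(explained, 0.9)) + 1      # components for 90% of the variance
print("compressibility proxy:", 1 - k / A.shape[0])
```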

  16. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.

  17. SYNCSA--R tool for analysis of metacommunities based on functional traits and phylogeny of the community components.

    PubMed

    Debastiani, Vanderlei J; Pillar, Valério D

    2012-08-01

    SYNCSA is an R package for the analysis of metacommunities based on functional traits and the phylogeny of the community components. It offers tools to calculate several matrix correlations that express trait-convergence assembly patterns, trait-divergence assembly patterns and the phylogenetic signal in functional traits at the species pool level and at the metacommunity level. SYNCSA is a package for the R environment, under a GPL-2 open-source license, and is freely available on the official CRAN server for R (http://cran.r-project.org). Contact: vanderleidebastiani@yahoo.com.br.

  18. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  19. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative way of wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensorial descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387
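
    The pipeline described maps naturally onto scikit-learn: FastICA reduces the sensor responses to independent components, which then feed a small neural network classifier. Everything below (data, labels, layer sizes) is a placeholder sketch of that flow, not the authors' tuned system.

```python
# e-nose flow: FastICA feature extraction, then an ANN classifier.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(10)
X = rng.standard_normal((26, 16))   # 26 wines x 16 e-nose sensor features (placeholder)
y = rng.integers(0, 3, 26)          # wine class labels (placeholder)

S = FastICA(n_components=5, random_state=0).fit_transform(X)   # independent components

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(S, y)
print(clf.score(S, y))              # in-sample accuracy of the sketch
```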

  20. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  1. Artifact suppression and analysis of brain activities with electroencephalography signals.

    PubMed

    Rashed-Al-Mahfuz, Md; Islam, Md Rabiul; Hirose, Keikichi; Molla, Md Khademul Islam

    2013-06-05

    A brain-computer interface is a communication system that connects the brain with a computer (or other devices) but is not dependent on the normal output pathways of the brain (i.e., peripheral nerve and muscle). The electro-oculogram is a dominant artifact which has a significant negative influence on further analysis of real electroencephalography data. This paper presented a data-adaptive technique for artifact suppression and brain wave extraction from electroencephalography signals to detect regional brain activities. An empirical mode decomposition based adaptive thresholding approach was employed here to suppress the electro-oculogram artifact. Fractional Gaussian noise was used to determine the threshold level, derived from the analysis data without any training. The purified electroencephalography signal was composed of the brain waves, also called rhythmic components, which represent the brain activities. The rhythmic components were extracted from each electroencephalography channel using an adaptive Wiener filter with the original scale. The regional brain activities were mapped on the basis of the spatial distribution of rhythmic components, and the results showed that different regions of the brain are activated in response to different stimuli. This research analyzed the activities of a single rhythmic component, alpha, with respect to different motor imagery tasks. The experimental results showed that the proposed method is very efficient in artifact suppression and in identifying individual motor imagery based on the activities of the alpha component.

  2. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.

  3. Transient analysis mode participation for modal survey target mode selection using MSC/NASTRAN DMAP

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Ibrahim, Omar M.; Sullivan, Timothy L.; Goodnight, Thomas W.

    1994-01-01

    Many methods have been developed to aid analysts in identifying component modes which contribute significantly to component responses. These modes, typically targeted for dynamic model correlation via a modal survey, are known as target modes. Most methods used to identify target modes are based on component global dynamic behavior. It is sometimes unclear if these methods identify all modes contributing to responses important to the analyst. These responses are usually those in areas of hardware design concerns. One method used to check the completeness of target mode sets and identify modes contributing significantly to important component responses is mode participation. With this method, the participation of component modes in dynamic responses is quantified. Those modes which have high participation are likely modal survey target modes. Mode participation is most beneficial when it is used with responses from analyses simulating actual flight events. For spacecraft, these responses are generated via a structural dynamic coupled loads analysis. Using MSC/NASTRAN DMAP, a method has been developed for calculating mode participation based on transient coupled loads analysis results. The algorithm has been implemented to be compatible with an existing coupled loads methodology and has been used successfully to develop a set of modal survey target modes.

  4. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    NASA Astrophysics Data System (ADS)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one of such methods, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and enables the discovery of regularities shaping the relationships between various elements of the environment under study on the other hand. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25,000 and maps of forest habitats. Consequently, nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitats area (ha), deciduous forest habitats area (ha), alder habitats area (ha). The values for individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting commune ecodiversity were distinguished: a hypsometric component (PC1), a deciduous forest habitats component (PC2), a river valleys and alder habitats component (PC3), and a lakes component (PC4). The distinguished factors characterise the natural qualities of the postglacial area and reflect well the role of the four most important groups of environment components in shaping the ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores, and five classes of diversity were then isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were identified. These regions are also very attractive for tourists and valuable in terms of their rich nature, which includes protected areas such as the Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to other research methods used so far.
    Literature: Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.

  5. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
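    The PCA-plus-confidence-ellipse classification described above can be sketched as follows; this is a toy reconstruction with synthetic "spectra", a 95% chi-square ellipse, and a Mahalanobis-distance membership test, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

# Stand-in spectra: three colour classes, 50 spectral bands each
rng = np.random.default_rng(2)
spectra = np.vstack([rng.normal(m, 0.1, size=(40, 50)) for m in (0.2, 0.5, 0.8)])
labels = np.repeat([0, 1, 2], 40)

pca = PCA(n_components=2).fit(spectra)
scores = pca.transform(spectra)

def fit_ellipse(pts, level=0.95):
    """Gaussian confidence ellipse of one class in PC space."""
    mu, cov = pts.mean(axis=0), np.cov(pts, rowvar=False)
    r2 = chi2.ppf(level, df=pts.shape[1])  # squared Mahalanobis radius
    return mu, np.linalg.inv(cov), r2

ellipses = {c: fit_ellipse(scores[labels == c]) for c in np.unique(labels)}

def classify(spectrum):
    """Assign a spectrum to the first class whose ellipse contains it."""
    z = pca.transform(spectrum.reshape(1, -1))[0]
    for c, (mu, cov_inv, r2) in ellipses.items():
        if (z - mu) @ cov_inv @ (z - mu) <= r2:
            return c
    return None  # outside every confidence ellipse
```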

  6. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
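    A minimal sketch of the SVD-based channel decomposition that underlies the ideal precoded scheme (the spatial Q-component interleaver and the PEP optimization are beyond this fragment); the channel realization, symbols, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
nt = nr = 4  # transmit/receive antennas

# Rayleigh MIMO channel and unit-energy QPSK symbols
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
bits = rng.integers(0, 2, size=(2, nt))
s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# SVD precoding: x = V s turns H into parallel subchannels y = diag(d) s + n
U, d, Vh = np.linalg.svd(H)
x = Vh.conj().T @ s
noise = 0.05 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + noise
s_hat = (U.conj().T @ y) / d  # per-subchannel equalization
```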

  7. Shape optimization of three-dimensional stamped and solid automotive components

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.; Yang, R.-J.; Bennett, J. A.

    1987-01-01

    The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.

  8. A component analysis of positive behaviour support plans.

    PubMed

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  9. 40 CFR 89.118 - Deterioration factors and service accumulation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., subsystems, or components selected by the manufacturer under § 89.117(d). The manufacturer shall describe the... must be based on good engineering judgment. (iii) Engineering analysis for established technologies. (A) In the case where an engine family uses established technology, an analysis based on good engineering...

  10. 40 CFR 89.118 - Deterioration factors and service accumulation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., subsystems, or components selected by the manufacturer under § 89.117(d). The manufacturer shall describe the... must be based on good engineering judgment. (iii) Engineering analysis for established technologies. (A) In the case where an engine family uses established technology, an analysis based on good engineering...

  11. 40 CFR 89.118 - Deterioration factors and service accumulation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., subsystems, or components selected by the manufacturer under § 89.117(d). The manufacturer shall describe the... must be based on good engineering judgment. (iii) Engineering analysis for established technologies. (A) In the case where an engine family uses established technology, an analysis based on good engineering...

  12. 40 CFR 89.118 - Deterioration factors and service accumulation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., subsystems, or components selected by the manufacturer under § 89.117(d). The manufacturer shall describe the... must be based on good engineering judgment. (iii) Engineering analysis for established technologies. (A) In the case where an engine family uses established technology, an analysis based on good engineering...

  13. 40 CFR 89.118 - Deterioration factors and service accumulation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., subsystems, or components selected by the manufacturer under § 89.117(d). The manufacturer shall describe the... must be based on good engineering judgment. (iii) Engineering analysis for established technologies. (A) In the case where an engine family uses established technology, an analysis based on good engineering...

  14. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.

  15. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
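    A hedged sketch of the principal-component estimate of r and the follow-up formula for the number of measurements; the formulas shown (r from the leading eigenvalue of the correlation matrix, and the measurement count from a target coefficient of determination) are the ones commonly used in repeatability studies and are an assumption about this paper's exact computation; the data are synthetic.

```python
import numpy as np

def repeatability_cpcor(X):
    """r from the leading eigenvalue of the correlation matrix (CPCOR).

    X : (genotypes, measurements) matrix, e.g. fruit counts per year.
    Common estimate: r = (lambda_1 - 1) / (m - 1).
    """
    m = X.shape[1]
    lam1 = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)).max()
    return (lam1 - 1.0) / (m - 1.0)

def n_measurements(r, target_r2=0.85):
    """Measurements needed to reach a target coefficient of determination."""
    return int(np.ceil(target_r2 * (1 - r) / ((1 - target_r2) * r)))

# Stand-in data: 75 genotypes measured over 5 years
rng = np.random.default_rng(4)
genetic = rng.normal(100, 25, size=(75, 1))       # stable genotype effect
X = genetic + rng.normal(0, 15, size=(75, 5))     # plus yearly deviation
r = repeatability_cpcor(X)
print(r, n_measurements(r))
```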

  16. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
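    The subspace trick can be sketched in a few lines of NumPy: one p-dimensional SVD of the original data, then only n x n SVDs per bootstrap draw; the sizes below are toy values and sign alignment across draws is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, B, k = 10_000, 50, 200, 3
X = rng.standard_normal((p, n))
X -= X.mean(axis=1, keepdims=True)

# One p-dimensional SVD of the original sample: X = U diag(d) Vt
U, d, Vt = np.linalg.svd(X, full_matrices=False)
low = d[:, None] * Vt                        # (n, n) subject coordinates in span(U)

# Every bootstrap PC lies in span(U), so only n x n SVDs are needed
coords = np.empty((B, n, k))
for b in range(B):
    Ab = low[:, rng.integers(0, n, size=n)]  # resample subjects (columns)
    Ab -= Ab.mean(axis=1, keepdims=True)
    Rb = np.linalg.svd(Ab, full_matrices=False)[0]
    coords[b] = Rb[:, :k]                    # low-dimensional bootstrap PCs

# p-dimensional standard error of PC 1 without storing p-dim PCs per draw
# (sign alignment across bootstrap draws is omitted in this sketch)
se_pc1 = (U @ coords[:, :, 0].T).std(axis=1, ddof=1)
```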

  17. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
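    As a minimal kernel PCA illustration (scikit-learn based; the dataset and kernel width are arbitrary choices, not from the paper):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: linearly inseparable, so no single linear PCA
# component can separate the classes
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

Z_lin = PCA(n_components=2).fit_transform(X)
Z_rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)

# With the RBF kernel, the first component alone separates the classes
sep = abs(Z_rbf[y == 0, 0].mean() - Z_rbf[y == 1, 0].mean())
```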

  18. [Preliminary study on effective components of Tripterygium wilfordii for liver toxicity based on spectrum-effect correlation analysis].

    PubMed

    Zhao, Xiao-Mei; Pu, Shi-Biao; Zhao, Qing-Guo; Gong, Man; Wang, Jia-Bo; Ma, Zhi-Jie; Xiao, Xiao-He; Zhao, Kui-Jun

    2016-08-01

    In this paper, the spectrum-effect correlation analysis method was used to explore the main components of Tripterygium wilfordii responsible for liver toxicity, and to provide a reference for improving the quality control of T. wilfordii. The Chinese medicine T. wilfordii was taken as the study object, and LC-Q-TOF-MS was used to characterize the chemical components in T. wilfordii samples from different areas; the main components were initially identified with reference to the literature. With normal human hepatocytes (LO2 cell line) as the carrier, acetaminophen as the positive control, and cell inhibition rate as the test index, simple correlation analysis and multivariate linear correlation analysis were used to screen the main components of T. wilfordii responsible for liver toxicity. As a result, 10 main components were identified, and the spectrum-effect correlation analysis showed that triptolide may be the toxic component, which is consistent with previous results in the traditional literature. The multivariate linear correlation analysis also indicated that tripterine and demethylzeylasteral may contribute substantially to liver toxicity. T. wilfordii samples of different varieties or origins showed large differences in quality: samples from southwest China showed lower liver toxicity, while those from Hunan and Anhui provinces showed higher liver toxicity. This study provides data support for the further rational use of T. wilfordii and for research on its hepatotoxic ingredients. Copyright© by the Chinese Pharmaceutical Association.

  19. Genetic association of impulsivity in young adults: a multivariate study

    PubMed Central

    Khadka, S; Narayanan, B; Meda, S A; Gelernter, J; Han, S; Sawyer, B; Aslanzadeh, F; Stevens, M C; Hawkins, K A; Anticevic, A; Potenza, M N; Pearlson, G D

    2014-01-01

    Impulsivity is a heritable, multifaceted construct with clinically relevant links to multiple psychopathologies. We assessed impulsivity in young adult (N~2100) participants in a longitudinal study, using self-report questionnaires and computer-based behavioral tasks. Analysis was restricted to the subset (N=426) who underwent genotyping. Multivariate association between impulsivity measures and single-nucleotide polymorphism data was implemented using parallel independent component analysis (Para-ICA). Pathways associated with multiple genes in components that correlated significantly with impulsivity phenotypes were then identified using a pathway enrichment analysis. Para-ICA revealed two significantly correlated genotype–phenotype component pairs. One impulsivity component included the reward responsiveness subscale and behavioral inhibition scale of the Behavioral-Inhibition System/Behavioral-Activation System scale, and the second impulsivity component included the non-planning subscale of the Barratt Impulsiveness Scale and the Experiential Discounting Task. Pathway analysis identified processes related to neurogenesis, nervous system signal generation/amplification, neurotransmission and immune response. We identified various genes and gene regulatory pathways associated with empirically derived impulsivity components. Our study suggests that gene networks implicated previously in brain development, neurotransmission and immune response are related to impulsive tendencies and behaviors. PMID:25268255

  20. Application of Sensory Evaluation, HS-SPME GC-MS, E-Nose, and E-Tongue for Quality Detection in Citrus Fruits.

    PubMed

    Qiu, Shanshan; Wang, Jun

    2015-10-01

    In this study, an electronic tongue (E-tongue), headspace solid-phase microextraction gas chromatography-mass spectrometry (GC-MS), an electronic nose (E-nose), and quantitative descriptive analysis (QDA) were applied to describe 2 types of citrus fruits (Satsuma mandarins [Citrus unshiu Marc.] and sweet oranges [Citrus sinensis {L.} Osbeck]) and their mixed juices systematically and comprehensively. Because some aroma components and flavor molecules interact with the whole juice matrix, the changes in most components of the fruit juice were not proportional to the mixing ratio of the 2 citrus fruits. The potential correlations among the signals of the E-tongue and E-nose, the volatile components, and the sensory attributes were analyzed using analysis of variance partial least squares regression. The results showed that the variables from the sensor signals (E-tongue and E-nose systems) had significant positive (or negative) correlations with most variables of the volatile components (GC-MS) and sensory attributes (QDA). The simultaneous utilization of the E-tongue and E-nose achieved a perfect classification result with a 100% accuracy rate based on linear discriminant analysis, and also attained satisfying predictions with high association coefficients for the sensory attributes (R(2) > 0.994 for training sets and R(2) > 0.983 for testing sets) and for the volatile components (R(2) > 0.992 for training sets and R(2) > 0.990 for testing sets) based on random forest. Being easy to use, cost-effective, robust, and capable of providing a fast analysis procedure, the E-nose and E-tongue could be used as an alternative detection system to traditional analysis methods such as GC-MS and sensory evaluation by a human panel in the fruit industry. Based on these results, one can conclude that a fusion system composed of the E-tongue and E-nose can guarantee satisfying results in the prediction of sensory attributes and volatile components for fruit quality profiling. © 2015 Institute of Food Technologists®

  1. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy of radiologists. Some methods for lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5,090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.

  2. Multi-component separation and analysis of bat echolocation calls.

    PubMed

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
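    The final step of the pipeline, extracting instantaneous amplitude and frequency from an isolated component via the Hilbert transform, can be sketched with SciPy; the chirp stand-in and sampling rate are hypothetical, and the fractional-Fourier separation stage is omitted.

```python
import numpy as np
from scipy.signal import chirp, hilbert

fs = 250_000                      # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.003, 1 / fs)   # a 3 ms call

# Stand-in for one isolated FM component of a biosonar call
component = chirp(t, f0=80_000, f1=30_000, t1=t[-1], method="hyperbolic")

# Analytic signal -> instantaneous amplitude and frequency
analytic = hilbert(component)
envelope = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
```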

  3. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value lies in non-routine components. Maintenance components are components procured on the basis of maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by reevaluating the components required by maintenance activities. The roller mill system was chosen as the case study because it has the highest unscheduled downtime record. The components required for each maintenance activity are determined from each component's failure distribution, so the number of components needed can be predicted. Moreover, these components can be reclassified from non-routine to routine components, so their procurement can be carried out regularly. Based on the analysis, the failures addressed by the maintenance tasks were classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities, 19 were reclassified from non-routine to routine components. The reliability of, and demand for, these components were then calculated for a one-year operation period. Based on these findings, it is suggested to replace all of the relevant components during overhaul to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by maintenance activities, so that procurement value decreases and system reliability increases.

  4. A systems engineering analysis of three-point and four-point wind turbine drivetrain configurations

    DOE PAGES

    Guo, Yi; Parsons, Tyler; Dykes, Katherine; ...

    2016-08-24

    This study compares the impact of drivetrain configuration on the mass and capital cost of a series of wind turbines ranging from 1.5 MW to 5.0 MW power ratings for both land-based and offshore applications. The analysis is performed with a new physics-based drivetrain analysis and sizing tool, Drive Systems Engineering (DriveSE), which is part of the Wind-Plant Integrated System Design & Engineering Model. DriveSE uses physics-based relationships to size all major drivetrain components according to given rotor loads simulated based on International Electrotechnical Commission design load cases. The model's sensitivity to input loads that contain a high degree of variability was analyzed. Aeroelastic simulations are used to calculate the rotor forces and moments imposed on the drivetrain for each turbine design. DriveSE is then used to size all of the major drivetrain components for each turbine for both three-point and four-point configurations. The simulation results quantify the trade-offs in mass and component costs for the different configurations. On average, a 16.7% decrease in total nacelle mass can be achieved when using a three-point drivetrain configuration, resulting in a 3.5% reduction in turbine capital cost. This analysis is driven by extreme loads and does not consider fatigue. Thus, the effects of configuration choices on reliability and serviceability are not captured. Furthermore, a first-order estimate of the sizing, dimensioning, and costing of major drivetrain components is made, which can be used in larger system studies that consider trade-offs between subsystems such as the rotor, drivetrain, and tower.

  6. Artifacts and noise removal in electrocardiograms using independent component analysis.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-26

    Independent component analysis (ICA) is a novel technique capable of separating independent components from complex electrocardiogram (ECG) signals. The purpose of this analysis is to evaluate the effectiveness of ICA in removing artifacts and noise from ECG recordings. ICA is applied to remove artifacts and noise in ECG segments of either an individual ECG file from the CSE database or all files. The reconstructed ECGs are compared with the original ECG signal. For the four special cases discussed, the R-peak magnitudes of the CSE database ECG waveforms before and after applying ICA are also reported. The results show that in most cases the percentage error in reconstruction is very small and that there is a significant improvement in signal quality, i.e., SNR. All the ECG recordings dealt with showed an improved appearance after the use of ICA. This establishes the efficacy of ICA in eliminating noise and artifacts from electrocardiograms.
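    A toy reconstruction of the idea with scikit-learn's FastICA: three synthetic channels mixing an ECG-like spike train with baseline wander and mains interference are unmixed, the artifact components are zeroed, and the channels are rebuilt; selecting the cardiac source by kurtosis is an illustrative heuristic, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(6)

# Synthetic sources: spiky ECG-like train, baseline wander, 50 Hz mains
s_ecg = (np.mod(t, 0.8) < 0.02).astype(float)
s_wander = 0.8 * np.sin(2 * np.pi * 0.3 * t)
s_mains = 0.5 * np.sin(2 * np.pi * 50 * t)
S = np.c_[s_ecg, s_wander, s_mains]
X = S @ rng.uniform(0.5, 1.5, size=(3, 3)).T   # unknown channel mixing

ica = FastICA(n_components=3, random_state=0)
comps = ica.fit_transform(X)

# Keep the spikiest (highest-kurtosis) component as the cardiac source,
# zero out the artifact components, and reconstruct the clean channels
kurt = ((comps - comps.mean(0)) ** 4).mean(0) / comps.var(0) ** 2
clean = np.zeros_like(comps)
keep = np.argmax(kurt)
clean[:, keep] = comps[:, keep]
X_clean = ica.inverse_transform(clean)
```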

  7. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only one part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation.

  8. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model comprises three components: a computational fluid dynamics component based on an unstructured-grid, pressure-based formulation; a computational structural dynamics component developed in the framework of modal analysis; and a fluid-structure interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with published rigid-nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.

  9. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite its increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated under the assumption that the spatial components of background noise have uncorrelated time courses; another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.

  10. [Determination of the Plant Origin of Licorice Oil Extract, a Natural Food Additive, by Principal Component Analysis Based on Chemical Components].

    PubMed

    Tada, Atsuko; Ishizuki, Kyoko; Sugimoto, Naoki; Yoshimatsu, Kayo; Kawahara, Nobuo; Suematsu, Takako; Arifuku, Kazunori; Fukai, Toshio; Tamura, Yukiyoshi; Ohtsuki, Takashi; Tahara, Maiko; Yamazaki, Takeshi; Akiyama, Hiroshi

    2015-01-01

    "Licorice oil extract" (LOE) (antioxidant agent) is described in the notice of Japanese food additive regulations as a material obtained from the roots and/or rhizomes of Glycyrrhiza uralensis, G. inflata or G. glabra. In this study, we aimed to identify the original Glycyrrhiza species of eight food additive products using LC/MS. Glabridin, a characteristic compound in G. glabra, was specifically detected in seven products, and licochalcone A, a characteristic compound in G. inflata, was detected in one product. In addition, Principal Component Analysis (PCA) (a kind of multivariate analysis) using the data of LC/MS or (1)H-NMR analysis was performed. The data of thirty-one samples, including LOE products used as food additives, ethanol extracts of various Glycyrrhiza species and commercially available Glycyrrhiza species-derived products were assessed. Based on the PCA results, the majority of LOE products was confirmed to be derived from G. glabra. This study suggests that PCA using (1)H-NMR analysis data is a simple and useful method to identify the plant species of origin of natural food additive products.

  11. Tidal analysis of surface currents in the Porsanger fjord in northern Norway

    NASA Astrophysics Data System (ADS)

    Stramska, Malgorzata; Jankowski, Andrzej; Cieszyńska, Agata

    2016-04-01

    In this presentation we describe surface currents in the Porsanger fjord (Porsangerfjorden), located in the European Arctic in the vicinity of the Barents Sea. Our analysis is based on data collected in the summer of 2014 using a High Frequency radar system. Our interest in this fjord stems from the fact that it is a region of high climatic sensitivity. One of our long-term goals is to develop an improved understanding of the ongoing changes and the interactions between this fjord and large-scale atmospheric and oceanic conditions. To understand these changes, one must first improve knowledge of the physical processes that shape the environment of the fjord; the present study is a first step in this direction. Our main objective here is to evaluate the importance of tidal forcing. Tides in the Porsanger fjord are substantial, with a tidal range on the order of 3 meters. Tidal analysis attributes about 99% of the variance in the sea level time series recorded in Honningsvåg to tides. The most important tidal component based on sea level data is the M2 component (amplitude of ~90 cm). The S2 and N2 components (amplitudes of ~20 cm) also play a significant role in the semidiurnal sea level oscillations. The most important diurnal component is K1, with an amplitude of about 8 cm. Tidal analysis led us to the conclusion that the most important tidal component in the observed surface currents is also the M2 component, followed by the S2 component. Our results indicate that, in contrast to sea level, only about 10-20% of the variance in surface currents can be attributed to tidal currents. This means that about 80-90% of the variance can be credited to wind-induced and geostrophic currents. This work was funded by the Norway Grants (NCBR contract No. 201985, project NORDFLUX). Partial support for MS comes from the Institute of Oceanology (IO PAN).
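    Classical harmonic analysis of the kind quoted above reduces to a least-squares fit at known constituent periods; a minimal sketch on a synthetic sea-level series follows (nodal corrections and the actual Honningsvåg record are omitted).

```python
import numpy as np

# Periods (hours) of the constituents discussed above
periods = {"M2": 12.4206, "S2": 12.0000, "N2": 12.6583, "K1": 23.9345}

# Synthetic hourly sea-level record standing in for the tide gauge data
rng = np.random.default_rng(7)
t = np.arange(0.0, 30 * 24)  # one month, hours
eta = (0.90 * np.cos(2 * np.pi * t / 12.4206)
       + 0.20 * np.cos(2 * np.pi * t / 12.0 + 1.0)
       + 0.05 * rng.standard_normal(t.size))

# Least-squares fit: eta ~ mean + sum_k a_k cos(w_k t) + b_k sin(w_k t)
cols = [np.ones_like(t)]
for T in periods.values():
    w = 2 * np.pi / T
    cols += [np.cos(w * t), np.sin(w * t)]
G = np.column_stack(cols)
coef = np.linalg.lstsq(G, eta, rcond=None)[0]
amp = {name: np.hypot(coef[1 + 2 * i], coef[2 + 2 * i])
       for i, name in enumerate(periods)}

# Fraction of variance explained by the tidal fit
explained = 1 - np.var(eta - G @ coef) / np.var(eta)
```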

  12. USARCENT AOR Contingency Base Waste Stream Analysis: An Analysis of Solid Waste Streams at Five Bases in the U. S. Army Central (USARCENT) Area of Responsibility

    DTIC Science & Technology

    2013-03-31

    certainly remain comingled with other solid waste. For example, some bases provided containers for segregation of recyclables including plastic and...prevalent types of solid waste are food (19.1% by average sample weight), wood (18.9%), and plastics (16.0%) based on analysis of bases in...within the interval shown. Food and wood wastes are the largest components of the average waste stream (both at ~19% by weight), followed by plastic

  13. DMT-TAFM: a data mining tool for technical analysis of futures market

    NASA Astrophysics Data System (ADS)

    Stepanov, Vladimir; Sathaye, Archana

    2002-03-01

    Technical analysis of financial markets describes many patterns of market behavior. For practical use, all these descriptions need to be adjusted for each particular trading session. In this paper, we develop a data mining tool for technical analysis of the futures markets (DMT-TAFM), which dynamically generates rules based on the notion of price pattern similarity. The tool consists of three main components. The first provides visualization of data series on a chart with different ranges, scales, and chart sizes and types. The second constructs pattern descriptions using sets of polynomials. The third specifies the training set for mining, defines the similarity notion, and searches for sets of similar patterns. DMT-TAFM is useful for preparing the data and then revealing and systematizing statistical information about similar patterns found in any type of historical price series. We performed experiments with our tool on three decades of trading data for a hundred types of futures. Our results for this data set show that we can confirm or refute many well-known patterns on real data, as well as reveal new ones, and use the set of relatively consistent patterns found during data mining to develop better futures trading strategies.

  14. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    PubMed

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop consonants, and fricative consonants, based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes were submitted to a linear decomposition in which some of the loading factors (such as jaw and larynx position) were selected a priori, while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model from the midsagittal contour to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  15. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of crude oil prices; however, traditional models often fail to predict them accurately. To address this, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA), and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Second, independent components are separated by ICA, and their effect on the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
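    A sketch of the decompose-forecast-recombine pattern with statsmodels' ARIMA; since VMD and ICA need third-party implementations (e.g., the vmdpy package for VMD), a simple trend/residual split stands in for the decomposition, and all orders and window lengths are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a daily crude oil price series
rng = np.random.default_rng(8)
price = pd.Series(70 + np.cumsum(rng.normal(0, 0.5, size=500)))

# Placeholder decomposition. In the paper's pipeline the modes would come
# from VMD (e.g., the third-party `vmdpy` package) and be unmixed further
# with ICA before forecasting.
trend = price.rolling(30, min_periods=1, center=True).mean()
modes = [trend, price - trend]

# Forecast each mode separately, then recombine the forecasts
h = 20
forecast = sum(ARIMA(m, order=(1, 1, 1)).fit().forecast(h) for m in modes)
```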

  16. Hilbert-Huang transform analysis of dynamic and earthquake motion recordings

    USGS Publications Warehouse

    Zhang, R.R.; Ma, S.; Safak, E.; Hartzell, S.

    2003-01-01

    This study examines the rationale of Hilbert-Huang transform (HHT) for analyzing dynamic and earthquake motion recordings in studies of seismology and engineering. In particular, this paper first provides the fundamentals of the HHT method, which consist of the empirical mode decomposition (EMD) and the Hilbert spectral analysis. It then uses the HHT to analyze recordings of hypothetical and real wave motion, the results of which are compared with the results obtained by the Fourier data processing technique. The analysis of the two recordings indicates that the HHT method is able to extract some motion characteristics useful in studies of seismology and engineering, which might not be exposed effectively and efficiently by Fourier data processing technique. Specifically, the study indicates that the decomposed components in EMD of HHT, namely, the intrinsic mode function (IMF) components, contain observable, physical information inherent to the original data. It also shows that the grouped IMF components, namely, the EMD-based low- and high-frequency components, can faithfully capture low-frequency pulse-like as well as high-frequency wave signals. Finally, the study illustrates that the HHT-based Hilbert spectra are able to reveal the temporal-frequency energy distribution for motion recordings precisely and clearly.
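    A minimal HHT-style sketch, assuming the third-party PyEMD package (pip install EMD-signal) for the EMD step; the signal, and the index at which IMFs are grouped into high- and low-frequency components, are illustrative.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # third-party package (pip install EMD-signal)

fs = 100.0
t = np.arange(0, 20, 1 / fs)
# Synthetic "recording": low-frequency pulse plus a high-frequency wave
motion = (np.exp(-0.5 * (t - 10) ** 2) * np.sin(2 * np.pi * 0.5 * t)
          + 0.3 * np.sin(2 * np.pi * 12 * t))

imfs = EMD().emd(motion)        # intrinsic mode functions (IMFs)

# EMD-based high/low-frequency components (split index chosen by eye)
high = imfs[:2].sum(axis=0)
low = imfs[2:].sum(axis=0)

# Hilbert spectral view: instantaneous amplitude/frequency per IMF
analytic = hilbert(imfs)        # applied along the last axis
inst_amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic)), axis=1) * fs / (2 * np.pi)
```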

  17. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models was implemented in the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) code, including fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
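    The response-versus-resistance reliability computation can be illustrated with a plain Monte Carlo sketch (NESSUS itself uses faster, more efficient reliability methods); the lognormal stress and normal strength models below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000

# Hypothetical component models: lognormal stress response versus
# normally distributed material resistance (units: MPa)
stress = rng.lognormal(mean=np.log(400.0), sigma=0.10, size=n)
resistance = rng.normal(loc=600.0, scale=50.0, size=n)

pf = np.mean(stress >= resistance)     # probability of failure
reliability = 1.0 - pf
se = np.sqrt(pf * (1.0 - pf) / n)      # Monte Carlo standard error
```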

  18. Titrimetric Analysis of Han-Based Liquid Propellants

    DTIC Science & Technology

    1988-03-01

    acid-base and Karl Fischer titrimetry, procedures that quantitatively determine the three major propellant components. The method developed converts...sodium hydroxide as titrant for both HAN and TEAN. Water is determined by Karl Fischer titration using the proprietary reagent "Hydranal". Each major...water, react with one or more of the components of the Karl Fischer reagent. One of the newer Karl Fischer titrants is "Hydranal", a proprietary reagent

  19. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis.

    PubMed

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-29

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance.
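    Robust PCA by principal component pursuit can be sketched compactly with an inexact augmented-Lagrange iteration; this generic low-rank-plus-sparse split (frames stacked as columns) is only a stand-in for the paper's full registration pipeline.

```python
import numpy as np

def rpca(M, max_iter=500, tol=1e-7):
    """Robust PCA by principal component pursuit (inexact ALM sketch).

    M : (n_pixels, n_frames) matrix with one vectorized frame per column.
    Returns L (low rank: anatomy/motion) and S (sparse: enhancement).
    """
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    shrink = lambda A, tau: np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)
    for _ in range(max_iter):
        # Singular value thresholding step for the low-rank part
        U, d, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(d - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding step for the sparse part
        S = shrink(M - L + Y / mu, lam / mu)
        resid = M - L - S
        Y += mu * resid
        if np.linalg.norm(resid) < tol * np.linalg.norm(M):
            break
    return L, S

# Usage sketch: M = np.stack([frame.ravel() for frame in frames], axis=1)
```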

  20. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation. Segmentation and shading correction are coupled: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection but also for numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparing protein expression values.
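    One segmentation-coupled correction iteration might look as follows; note that the multiplicative field here is estimated by smoothing the background (a stand-in for the paper's fast intra-class-variance minimization), and the threshold and smoothing scale are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def correct_shading_once(img, threshold, sigma=50.0):
    """One segmentation/correction pass for img = shading * true + noise."""
    denoised = median_filter(img, size=3)    # suppress the additive noise
    cells = denoised > threshold             # 2-class split: cells vs background
    # Estimate the multiplicative field from background pixels only and
    # interpolate it smoothly across the frame
    bg = np.where(cells, np.nan, denoised)
    bg = np.where(np.isnan(bg), np.nanmedian(bg), bg)
    shading = gaussian_filter(bg, sigma)
    shading /= shading.mean()                # preserve overall brightness
    return denoised / np.maximum(shading, 1e-6)

# Iterate: re-segment the corrected image, then refine the correction
```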

  1. Component analysis and target cell-based neuroactivity screening of Panax ginseng by ultra-performance liquid chromatography coupled with quadrupole-time-of-flight mass spectrometry.

    PubMed

    Yuan, Jinbin; Chen, Yang; Liang, Jian; Wang, Chong-Zhi; Liu, Xiaofei; Yan, Zhihong; Tang, Yi; Li, Jiankang; Yuan, Chun-Su

    2016-12-01

    Ginseng is one of the most widely used natural medicines in the world. Recent studies have suggested that Panax ginseng has a wide range of beneficial effects on aging, central nervous system disorders, and neurodegenerative diseases. However, knowledge about the specific bioactive components of ginseng is still limited. This work aimed to screen for the bioactive components in Panax ginseng that act against neurodegenerative diseases, using a target cell-based bioactivity screening method. First, component analysis of Panax ginseng extracts was performed by UPLC-QTOF-MS, and a total of 54 compounds in white ginseng were characterized and identified according to retention behavior, accurate molecular weight, MS characteristics, parent nucleus, aglycones, side chains, and literature data. A target cell-based bioactivity screening method using SH-SY5Y cells was then developed to predict the candidate compounds in ginseng. Four ginsenosides, Rg2, Rh1, Ro, and Rd, were observed to be active. The target cell-based bioactivity screening method coupled with the UPLC-QTOF-MS technique has suitable sensitivity and can be used as a screening tool for low-content bioactive constituents in natural products. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena, such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying singular value decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for empirical mode decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  3. Computational models for the analysis/design of hypersonic scramjet components. I - Combustor and nozzle models

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.

    1986-01-01

    An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code, which is based upon the SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code, which is based upon the SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress toward future hybrid and 3D versions are discussed.

  4. A preliminary structural analysis of space-base living quarters modules to verify a weight-estimating technique

    NASA Technical Reports Server (NTRS)

    Grissom, D. S.; Schneider, W. C.

    1971-01-01

    The determination of a baseline (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.

  5. Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.

    PubMed

    Zhang, Sheng; Li, Chiang-Shan R

    2017-11-01

    As a key structure to relay and integrate information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence accumulates to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole brain functional connectivity of each subdivision was computed for independent component's time course (ICtc), which is a unique time series to represent an IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that, at p < 10^-6, corrected, 49% of voxels on average overlapped among subdivisions. Compared with seed-region analysis, ICtc analysis revealed patterns of connectivity that were more distinguished between thalamic clusters. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which has eluded the analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate functional organization of the thalamus and suggest that ICA clustering in combination with ICtc rather than seed-region analysis better distinguishes whole-brain connectivities among functional clusters of a brain region.

  6. The Scenario-Based Engineering Process (SEP): a user-centered approach for the development of health care systems.

    PubMed

    Harbison, K; Kelly, J; Burnell, L; Silva, J

    1995-01-01

    The Scenario-based Engineering Process (SEP) is a user-focused methodology for large and complex system design. This process supports new application development from requirements analysis with domain models to component selection, design and modification, implementation, integration, and archival placement. It is built upon object-oriented methodologies, domain modeling strategies, and scenario-based techniques to provide an analysis process for mapping application requirements to available components. We are using SEP in the health care applications that we are developing. The process has already achieved success in the manufacturing and military domains and is being adopted by many organizations. SEP should prove viable in any domain containing scenarios that can be decomposed into tasks.

  7. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  8. Wind Turbine Control Design to Reduce Capital Costs: 7 January 2009 - 31 August 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darrow, P. J.

    2010-01-01

    This report first discusses and identifies which wind turbine components can benefit from advanced control algorithms and also presents results from a preliminary loads case analysis using a baseline controller. Next, it describes the design, implementation, and simulation-based testing of an advanced controller to reduce loads on those components. The case-by-case loads analysis and advanced controller design will help guide future control research.

  9. Parametric Analysis to Study the Influence of Aerogel-Based Renders' Components on Thermal and Mechanical Performance.

    PubMed

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-05-04

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study's objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect.
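
    As a hedged illustration of the statistical approach described above (synthetic data and hypothetical predictors, not the authors' 85-mix dataset), an ordinary least-squares multiple regression of thermal conductivity on component dosages can be fit as follows:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_mixes = 85
# Hypothetical dosages, e.g. aerial lime, fly ash, expanded perlite, aerogel:
X = rng.uniform(0, 1, size=(n_mixes, 4))
# Synthetic thermal conductivity with a mild dependence on two components:
y = 0.05 - 0.02 * X[:, 0] - 0.01 * X[:, 3] + rng.normal(0, 0.002, n_mixes)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # coefficient signs indicate each component's influence
```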

  10. Parametric Analysis to Study the Influence of Aerogel-Based Renders’ Components on Thermal and Mechanical Performance

    PubMed Central

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-01-01

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study’s objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect. PMID:28773460

  11. Comparison of three-dimensional fluorescence analysis methods for predicting formation of trihalomethanes and haloacetic acids.

    PubMed

    Peleato, Nicolás M; Andrews, Robert C

    2015-01-01

    This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error of cross-folded test sets (THMs: 43.7 (μg/L)², HAAs: 233.3 (μg/L)²). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
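
    The principal-component-regression idea can be sketched as below, with synthetic stand-ins for the fluorescence spectra and THM measurements; nothing here reproduces the study's data or exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
eem = rng.standard_normal((40, 1000))     # 40 samples x flattened EEM intensities
thm = rng.uniform(10, 80, 40)             # THM formation (ug/L), hypothetical

# PC scores of the spectra feed a multi-linear model for THM formation:
scores = PCA(n_components=5).fit_transform(eem)
mse = -cross_val_score(LinearRegression(), scores, thm,
                       scoring="neg_mean_squared_error", cv=5)
print(mse.mean())                          # cross-folded prediction error
```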

  12. COMPUTATIONAL ANALYSIS OF SWALLOWING MECHANICS UNDERLYING IMPAIRED EPIGLOTTIC INVERSION

    PubMed Central

    Pearson, William G.; Taylor, Brandon K; Blair, Julie; Martin-Harris, Bonnie

    2015-01-01

    Objective Determine swallowing mechanics associated with the first and second epiglottic movements, that is, movement to horizontal and full inversion respectively, in order to provide a clinical interpretation of impaired epiglottic function. Study Design Retrospective cohort study. Methods A heterogeneous cohort of patients with swallowing difficulties was identified (n=92). Two speech-language pathologists reviewed 5 mL thin and 5 mL pudding videofluoroscopic swallow studies per subject, and assigned epiglottic component scores of 0=complete inversion, 1=partial inversion, and 2=no inversion, forming three groups of videos for comparison. Coordinates mapping minimum and maximum excursion of the hyoid, pharynx, larynx, and tongue base during pharyngeal swallowing were recorded using ImageJ software. A canonical variate analysis with post-hoc discriminant function analysis of coordinates was performed using MorphoJ software to evaluate mechanical differences between groups. Eigenvectors characterizing swallowing mechanics underlying impaired epiglottic movements were visualized. Results Nineteen of 184 video-swallows were rejected for poor quality (n=165). A Goodman-Kruskal index of predictive association showed no correlation between epiglottic component scores and etiologies of dysphagia (λ=.04). A two-way analysis of variance by epiglottic component scores showed no significant interaction effects between sex and age (f=1.4, p=.25). Discriminant function analysis demonstrated statistically significant mechanical differences between epiglottic component scores: 1&2, representing the first epiglottic movement (Mahalanobis distance=1.13, p=.0007); and 0&1, representing the second epiglottic movement (Mahalanobis distance=0.83, p=.003). Eigenvectors indicate that laryngeal elevation and tongue base retraction underlie both epiglottic movements. Conclusion Results suggest that reduced tongue base retraction and laryngeal elevation underlie impaired first and second epiglottic movements. The styloglossus, hyoglossus and long pharyngeal muscles are implicated as targets for rehabilitation in dysphagic patients with impaired epiglottic inversion.

  13. Assessing agro-environmental performance of dairy farms in northwest Italy based on aggregated results from indicators.

    PubMed

    Gaudino, Stefano; Goia, Irene; Grignani, Carlo; Monaco, Stefano; Sacco, Dario

    2014-07-01

    Dairy farms control an important share of the agricultural area of Northern Italy. Zero grazing, large maize-cropped areas, high stocking densities, and high milk production make them intensive and prone to impact the environment. Currently, few published studies have proposed indicator sets able to describe the entire dairy farm system and its internal components. This work had four aims: i) to propose a list of agro-environmental indicators to assess dairy farms; ii) to understand which indicators classify farms best; iii) to evaluate the dairy farms based on the proposed indicator list; iv) to link farmer decisions to the consequent environmental pressures. Forty agro-environmental indicators selected for this study are described. Northern Italy dairy systems were analysed considering both farmer decision indicators (farm management) and the resulting pressure indicators that demonstrate environmental stress on the entire farming system and its components: cropping system, livestock system, and milk production. The correlations among single indicators identified redundant indicators. Principal Components Analysis distinguished which indicators provided meaningful information about each pressure indicator group. Analysis of the communalities and the correlations among indicators identified those that best represented farm variability: Farm Gate N Balance, Greenhouse Gas Emission, and Net Energy of the farm system; Net Energy and Gross P Balance of the cropping system component; Energy Use Efficiency and Purchased Feed N Input of the livestock system component; N Eco-Efficiency of the milk production component. Farm evaluation based on the complete list of selected indicators demonstrated that organic farming resulted in uniformly high values, while farms with low milk-producing herds showed uniformly low values. Yet on other farms, the environmental quality varied greatly when different groups of pressure indicators were considered, which highlighted the importance of expanding environmental analysis to effects within the farm. Statistical analysis demonstrated positive correlations between all farmer decision and pressure group indicators. Consumption of mineral fertiliser and pesticide negatively influenced the cropping system. Furthermore, stocking rate was found to correlate positively with the milk production component and negatively with the farm system. This study provides baseline references for ex ante policy evaluation, and monitoring tools for analysis of environmental policy implementation both in itinere and ex post. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The Design of a Graphical User Interface for an Electronic Classroom.

    ERIC Educational Resources Information Center

    Cahalan, Kathleen J.; Levin, Jacques

    2000-01-01

    Describes the design of a prototype for the graphical user interface component of an electronic classroom (ECR) application that supports real-time lectures and question-and-answer sessions between an instructor and students. Based on requirements analysis and an analysis of competing products, a Web-based ECR prototype was produced. Findings show…

  15. RF control at SSCL — an object oriented design approach

    NASA Astrophysics Data System (ADS)

    Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.

    1994-12-01

    The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1993, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects, and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.

  16. A Nonlinear Model for Gene-Based Gene-Environment Interaction.

    PubMed

    Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua

    2016-06-04

    A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant-based approaches, in this work, we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for SNPs in a gene, then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluate gene-based G×E interaction.
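
    A minimal sketch of the sparse-PCA step only (not the authors' VC-sPCR implementation; the genotype matrix, rank, and penalty are hypothetical), showing how sparse loadings reveal the relative importance of SNPs in each component:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(3)
genotypes = rng.integers(0, 3, size=(500, 30)).astype(float)  # 500 subjects x 30 SNPs

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
scores = spca.fit_transform(genotypes)        # inputs to a varying-coefficient model
loadings = spca.components_                   # sparse rows: which SNPs drive each PC
print((np.abs(loadings) > 1e-8).sum(axis=1))  # SNPs retained per component
```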

  17. Simultaneous quantitation of 14 active components in Yinchenhao decoction with an ultrahigh performance liquid chromatography-diode array detector: Method development and ingredient analysis of different commonly prepared samples.

    PubMed

    Yi, YaXiong; Zhang, Yong; Ding, Yue; Lu, Lu; Zhang, Tong; Zhao, Yuan; Xu, XiaoJun; Zhang, YuXin

    2016-11-01

    J. Sep. Sci. 2016, 39, 4147-4157, DOI: 10.1002/jssc.201600284. Yinchenhao decoction (YCHD) is a famous Chinese herbal formula recorded in the Shang Han Lun, prescribed by Zhang Zhongjing (150-219 AD). A novel quantitative analysis method was developed, based on ultrahigh performance liquid chromatography coupled with a diode array detector, for the simultaneous determination of 14 main active components in Yinchenhao decoction. Furthermore, the method has been applied to compositional difference analysis of the 14 components in eight normal extraction samples of Yinchenhao decoction, with the aid of hierarchical clustering analysis and similarity analysis. The present research could help hospitals, factories, and laboratories choose the best way to prepare Yinchenhao decoction with better efficacy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake) is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  19. Electrical Motor Current Signal Analysis using a Modulation Signal Bispectrum for the Fault Diagnosis of a Gearbox Downstream

    NASA Astrophysics Data System (ADS)

    Haram, M.; Wang, T.; Gu, F.; Ball, A. D.

    2012-05-01

    Motor current signal analysis has for many years been an effective way of monitoring electrical machines themselves. However, little work has been carried out in using this technique for monitoring their downstream equipment because of difficulties in extracting small fault components from the measured current signals. This paper investigates the characteristics of electrical current signals for monitoring the faults from a downstream gearbox using a modulation signal bispectrum (MSB), including phase effects in extracting small modulating components in a noisy measurement. An analytical study is firstly performed to understand amplitude, frequency and phase characteristics of current signals due to faults. It then explores the performance of MSB analysis in detecting weak modulating components in current signals. An experimental study based on a 10 kW two-stage gearbox, driven by a three-phase induction motor, shows that MSB peaks at different rotational frequencies can be used to quantify the severity of gear tooth breakage and the degree of shaft misalignment. In addition, the type and location of a fault can be recognized based on the frequency at which the change in MSB peak is largest.
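
    The following numpy sketch estimates a modulation signal bispectrum under one common definition, B_MS(fc, fm) = E[X(fc+fm) X(fc-fm) X*(fc)^2], on a synthetic amplitude-modulated current; the sampling rate, segmentation, and the 7 Hz fault modulation are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

fs, n_seg, seg_len = 1000, 50, 1000          # 1 Hz bin width (hypothetical)
rng = np.random.default_rng(4)
t = np.arange(n_seg * seg_len) / fs
# 50 Hz supply component, amplitude-modulated at 7 Hz (a fault signature):
x = (1 + 0.05 * np.cos(2 * np.pi * 7 * t)) * np.cos(2 * np.pi * 50 * t)
x = x + 0.1 * rng.standard_normal(x.size)

segs = x.reshape(n_seg, seg_len) * np.hanning(seg_len)
X = np.fft.rfft(segs, axis=1)

def msb(X, i_c, i_m):
    """Segment-averaged MSB kernel at carrier bin i_c, modulation bin i_m."""
    return np.mean(X[:, i_c + i_m] * X[:, i_c - i_m] * np.conj(X[:, i_c]) ** 2)

print(abs(msb(X, 50, 7)))    # large when both modulation sidebands are present
print(abs(msb(X, 50, 11)))   # near zero at a frequency with no modulation
```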

  20. Wavelength selection for portable noninvasive blood component measurement system based on spectral difference coefficient and dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling

    2018-03-01

    Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. Dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method has been presented in this paper. In contrast to many common wavelength selection methods, this method is based on sensing mechanism which has a clear mechanism and can effectively avoid the noise from acquisition system. The spectral difference coefficient was theoretically proved to have a guiding significance for wavelength selection. After theoretical analysis, the multi-band spectral difference coefficient-wavelength selection method combining with the dynamic spectrum was proposed. An experimental analysis based on clinical trial data from 200 volunteers has been conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experiment result shows that the prediction precision of hemoglobin concentration using multi-band spectral difference coefficient-wavelength selection method is higher compared with other methods.
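
    Since the record names the extreme learning machine as the calibration model, here is a minimal self-contained ELM sketch; the spectra and hemoglobin values are synthetic, and the hidden-layer size and activation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 12))          # 200 subjects x 12 selected wavelengths
y = X @ rng.standard_normal(12) + 0.1 * rng.standard_normal(200)  # "hemoglobin"

n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (fixed)
b = rng.standard_normal(n_hidden)                  # random biases (fixed)
H = np.tanh(X @ W + b)                             # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # only output weights are trained

y_hat = np.tanh(X @ W + b) @ beta
print(np.sqrt(np.mean((y - y_hat) ** 2)))          # in-sample RMSE
```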

  1. Empirical evaluation of grouping of lower urinary tract symptoms: principal component analysis of Tampere Ageing Male Urological Study data.

    PubMed

    Pöyhönen, Antti; Häkkinen, Jukka T; Koskimäki, Juha; Hakama, Matti; Tammela, Teuvo L J; Auvinen, Anssi

    2013-03-01

    WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The ICS has divided LUTS into three groups: storage, voiding and post-micturition symptoms. The classification is based on anatomical, physiological and urodynamic considerations of a theoretical nature. We used principal component analysis (PCA) to determine the inter-correlations of various LUTS, which is a novel approach to research and can strengthen existing knowledge of the phenomenology of LUTS. After we had completed our analyses, another study was published that used a similar approach and results were very similar to those of the present study. We evaluated the constellation of LUTS using PCA of the data from a population-based study that included >4000 men. In our analysis, three components emerged from the 12 LUTS: voiding, storage and incontinence components. Our results indicated that incontinence may be separate from the other storage symptoms and post-micturition symptoms should perhaps be regarded as voiding symptoms. To determine how lower urinary tract symptoms (LUTS) relate to each other and assess if the classification proposed by the International Continence Society (ICS) is consistent with empirical findings. The information on urinary symptoms for this population-based study was collected using a self-administered postal questionnaire in 2004. The questionnaire was sent to 7470 men, aged 30-80 years, from Pirkanmaa County (Finland), of whom 4384 (58.7%) returned the questionnaire. The Danish Prostatic Symptom Score-1 questionnaire was used to evaluate urinary symptoms. Principal component analysis (PCA) was used to evaluate the inter-correlations among various urinary symptoms. The PCA produced a grouping of 12 LUTS into three categories consisting of voiding, storage and incontinence symptoms. Post-micturition symptoms were related to voiding symptoms, but incontinence symptoms were separate from storage symptoms. In the analyses by age group, similar categorization was found at ages 40, 50, 60 and 80 years, but only two groups of symptoms emerged among men aged 70 years. The prevalence among men aged 30 was too low for meaningful analysis. This population-based study suggests that LUTS can be divided into three subgroups consisting of voiding, storage and incontinence symptoms based on their inter-correlations. Our empirical findings suggest an alternative grouping of LUTS. The potential utility of such an approach requires careful consideration. © 2012 BJU International.

  2. From scenarios to domain models: processes and representations

    NASA Astrophysics Data System (ADS)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  3. A Comprehensive Strategy to Evaluate Compatible Stability of Chinese Medicine Injection and Infusion Solutions Based on Chemical Analysis and Bioactivity Assay.

    PubMed

    Li, Jian-Ping; Liu, Yang; Guo, Jian-Ming; Shang, Er-Xin; Zhu, Zhen-Hua; Zhu, Kevin Y; Tang, Yu-Ping; Zhao, Bu-Chang; Tang, Zhi-Shu; Duan, Jin-Ao

    2017-01-01

    Stability of traditional Chinese medicine injection (TCMI) is an important issue related to its clinical application. TCMIs are composed of multiple components; therefore, when evaluating TCMI stability, a few marker compounds cannot represent the global components or biological activities of the TCMI. To date, no method for evaluating TCMI stability that involves the global components or biological activities has been reported. In this paper, we established a comprehensive strategy composed of three different methods to evaluate the chemical and biological stability of a typical TCMI, Danhong injection (DHI). UHPLC-TQ/MS was used to analyze the stability of marker compounds (SaA, SaB, RA, DSS, PA, CA, and SG) in DHI, UHPLC-QTOF/MS was used to analyze the stability of global components (MW 80-1000 Da) in DHI, and a cell-based antioxidant capability assay was used to evaluate the bioactivity of DHI. We applied this strategy to assess the compatible stability of DHI and six infusion solutions (GS, NS, GNS, FI, XI, and DGI), which are commonly used in combination with DHI in clinic. GS was the best infusion solution for DHI, and DGI was the worst one based on marker compounds analysis. Based on global components analysis, XI and DGI were the worst infusion solutions for DHI. And based on the bioactivity assay, GS was the best infusion solution for DHI, and XI was the worst one. In conclusion, as evaluated by the established comprehensive strategy, GS was the best infusion solution, whereas XI and DGI were the worst infusion solutions for DHI. In the compatibility of DHI with XI or DGI, salvianolic acids in DHI were degraded, resulting in the reduction of original composition and generation of new components, and leading to changes in biological activities. This underlies the incompatibility of DHI with some infusion solutions. Our study provides references for choosing reasonable infusion solutions for DHI, which could contribute to the improvement of the safety and efficacy of DHI. Moreover, the established strategy may be applied to the compatible stability evaluation of other TCMIs.

  4. Higher-Order Theory: Structural/MicroAnalysis Code (HOTSMAC) Developed

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.

    2002-01-01

    The full utilization of advanced materials (be they composite or functionally graded materials) in lightweight aerospace components requires the availability of accurate analysis, design, and life-prediction tools that enable the assessment of component and material performance and reliability. Recently, a new commercially available software product called HOTSMAC (Higher-Order Theory--Structural/MicroAnalysis Code) was jointly developed by Collier Research Corporation, Engineered Materials Concepts LLC, and the NASA Glenn Research Center under funding provided by Glenn's Commercial Technology Office. The analytical framework for HOTSMAC is based on almost a decade of research into the coupled micro-macrostructural analysis of heterogeneous materials. Consequently, HOTSMAC offers a comprehensive approach for analyzing/designing the response of components with various microstructural details, including certain advantages not always available in standard displacement-based finite element analysis techniques. The capabilities of HOTSMAC include combined thermal and mechanical analysis, time-independent and time-dependent material behavior, and internal boundary cells (e.g., those that can be used to represent internal cooling passages, see the preceding figure), to name a few. In HOTSMAC problems, materials can be randomly distributed and/or functionally graded (as shown in the figure, wherein the inclusions are distributed linearly), or broken down by strata, such as in the case of thermal barrier coatings or composite laminates.

  5. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. Optimal pattern formation is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimensionality of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  6. High-precision measurements of cementless acetabular components using model-based RSA: an experimental study.

    PubMed

    Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld

    2007-08-01

    In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.

  7. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
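
    As a hedged illustration of this kind of estimation (the report's exact procedure is not reproduced here), a constant-failure-rate Poisson model with an exact chi-square confidence interval for the rate looks like this; the failure count and exposure hours are hypothetical.

```python
from scipy.stats import chi2

failures, hours = 7, 12000.0          # hypothetical test exposure
rate = failures / hours               # point estimate (failures per hour)

alpha = 0.05                          # two-sided 95% interval
lo = chi2.ppf(alpha / 2, 2 * failures) / (2 * hours)
hi = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / (2 * hours)
print(rate, (lo, hi))                 # non-overlapping intervals suggest a
                                      # real difference between two groups
```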

  8. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. The proposed approach is evaluated on the Pavia University hyperspectral dataset. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  9. Multi-component determination and chemometric analysis of Paris polyphylla by ultra high performance liquid chromatography with photodiode array detection.

    PubMed

    Chen, Pei; Jin, Hong-Yu; Sun, Lei; Ma, Shuang-Cheng

    2016-09-01

    Multi-source analysis of traditional Chinese medicine is key to ensuring its safety and efficacy. Compared with traditional experimental differentiation, chemometric analysis is a simpler strategy to identify traditional Chinese medicines. Multi-component analysis plays an increasingly vital role in the quality control of traditional Chinese medicines. A novel strategy, based on chemometric analysis and quantitative analysis of multiple components, was proposed to easily and effectively control the quality of traditional Chinese medicines such as Chonglou. Ultra high performance liquid chromatography made the analysis more convenient and efficient. Five species of Chonglou were distinguished by chemometric analysis, and nine saponins, including Chonglou saponins I, II, V, VI, VII, D, and H, as well as dioscin and gracillin, were determined in 18 min. The method is feasible and credible, and enables improved quality control of traditional Chinese medicines and natural products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. GOMMA: a component-based infrastructure for managing and analyzing life science ontologies and their evolution

    PubMed Central

    2011-01-01

    Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching and for determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205

  11. Implementation of the block-Krylov boundary flexibility method of component synthesis

    NASA Technical Reports Server (NTRS)

    Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.

    1993-01-01

    A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
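
    A conceptual numpy sketch of such a recurrence, with random stand-in matrices and a hypothetical seed block in place of the true boundary flexibility vectors: each new block needs only static solves against the stiffness matrix, never an eigensolution.

```python
import numpy as np

rng = np.random.default_rng(6)
n, block = 50, 4
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)            # SPD "stiffness" matrix
M = np.diag(rng.uniform(1.0, 2.0, n))  # "mass" matrix

X = rng.standard_normal((n, block))    # stand-in for boundary flexibility seed
basis = []
for _ in range(3):                     # three blocks of Ritz vectors
    X = np.linalg.solve(K, M @ X)      # static solutions: the only costly step
    for Q in basis:                    # orthogonalize against previous blocks
        X = X - Q @ (Q.T @ X)
    Q, _ = np.linalg.qr(X)
    basis.append(Q)
V = np.hstack(basis)                   # reduced basis for component synthesis

# Reduced component matrices for subsequent dynamic analysis:
K_red, M_red = V.T @ K @ V, V.T @ M @ V
```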

  12. Finite Element Analysis (FEA) in Design and Production.

    ERIC Educational Resources Information Center

    Waggoner, Todd C.; And Others

    1995-01-01

    Finite element analysis (FEA) enables industrial designers to analyze complex components by dividing them into smaller elements, then assessing stress and strain characteristics. Traditionally mainframe-based, FEA is increasingly being used on microcomputers. (SK)

  13. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
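
    A simplified version of the Fourier-basis FPCA pipeline can be sketched as follows; the city loads are synthetic, and the small basis stands in for the study's smoothing choices.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_cities, n_days = 42, 7
t = np.linspace(0, 1, n_days)
loads = rng.lognormal(size=(n_cities, n_days))     # hypothetical wastewater loads

# Fourier design matrix: constant term plus one sine/cosine pair.
B = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])
coefs, *_ = np.linalg.lstsq(B, loads.T, rcond=None)  # (3, n_cities) smooth fits

# PCA on the basis coefficients approximates FPCA of the smoothed curves:
fpca = PCA(n_components=3).fit(coefs.T)
print(fpca.explained_variance_ratio_)   # temporal variation explained per FPC
```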

  14. Analyzing the Impact of a Data Analysis Process to Improve Instruction Using a Collaborative Model

    ERIC Educational Resources Information Center

    Good, Rebecca B.

    2006-01-01

    The Data Collaborative Model (DCM) assembles assessment literacy, reflective practices, and professional development into a four-component process. The sub-components include assessing students, reflecting over data, professional dialogue, professional development for the teachers, interventions for students based on data results, and re-assessing…

  15. A Content Analysis of College and University Viewbooks (Brochures).

    ERIC Educational Resources Information Center

    Hite, Robert E.; Yearwood, Alisa

    2001-01-01

    Systematically examined the content and components of college viewbooks/brochures. Compiled findings on: (1) physical components (e.g., photographs and slogans); (2) message content based on school characteristics such as size, type of school, enrollment, location, etc.; and (3) the type of image schools with different characteristics are seeking…

  16. Enhancing the Impact of Quality Points in Interteaching

    ERIC Educational Resources Information Center

    Rosales, Rocío; Soldner, James L.; Crimando, William

    2014-01-01

    Interteaching is a classroom instruction approach based on behavioral principles that offers increased flexibility to instructors. There are several components of interteaching that may contribute to its demonstrated efficacy. In a prior analysis of one of these components, the quality points contingency, no significant difference was reported in…

  17. Insight and Evidence Motivating the Simplification of Dual-Analysis Hybrid Systems into Single-Analysis Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.

    2018-01-01

    Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis system approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations as compared to its hybrid counterpart should serve as enough evidence of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate the fact that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. Simpler than that, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective. More generally, single-analysis systems avoid contradictions such as having to choose one sub-component to generate performance diagnostics for another, possibly not fully consistent, component.

  18. Multi-Response Extraction Optimization Based on Anti-Oxidative Activity and Quality Evaluation by Main Indicator Ingredients Coupled with Chemometric Analysis on Thymus quinquecostatus Celak.

    PubMed

    Chang, Yan-Li; Shen, Meng; Ren, Xue-Yang; He, Ting; Wang, Le; Fan, Shu-Sheng; Wang, Xiu-Huan; Li, Xiao; Wang, Xiao-Ping; Chen, Xiao-Yi; Sui, Hong; She, Gai-Mei

    2018-04-19

    Thymus quinquecostatus Celak is a species of thyme in China that has long been used as a condiment and herbal medicine. To establish a quality evaluation for T. quinquecostatus, response surface methodology (RSM) based on its 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity was introduced to optimize the extraction conditions, and the main indicator components were identified using an UPLC-LTQ-Orbitrap MSn method. The optimum ethanol concentration, solid-liquid ratio, and extraction time were 42.32%, 1:17.51, and 1.8 h, respectively. Thirty-five components, comprising 12 phenolic acids and 23 flavonoids, were unambiguously or tentatively identified in both positive and negative ion modes and employed for comprehensive analysis of the optimum antioxidative fraction. A simple, reliable, and sensitive HPLC method was developed for the multi-component quantitative analysis of T. quinquecostatus, using six characteristic and principal phenolic acids and flavonoids as reference compounds. Furthermore, chemometric methods (principal components analysis (PCA) and hierarchical clustering analysis (HCA)) showed that the growing area and harvest time of this herb are closely related to its quality. This study provides full-scale qualitative and quantitative information for the quality evaluation of T. quinquecostatus, offers a valuable reference for further study and development of this herb, and lays the foundation for further study of its pharmacological efficacy.

  19. Discovery of Empirical Components by Information Theory

    DTIC Science & Technology

    2016-08-10

    Report AFRL-AFOSR-VA-TR-2016-0289, Discovery of Empirical Components by Information Theory; principal investigator Amit Singer, Trustees of Princeton University, 1 Nassau Hall. Dates covered: 15 Feb 2013 to 14 Feb 2016. The empirical component discovery methods developed draw not only from traditional linear algebra based numerical analysis and approximation theory, but also from information theory and graph theory.

  20. Characterization of extracellular polymeric substances in biofilms under long-term exposure to ciprofloxacin antibiotic using fluorescence excitation-emission matrix and parallel factor analysis.

    PubMed

    Gu, Chaochao; Gao, Pin; Yang, Fan; An, Dongxuan; Munir, Mariya; Jia, Hanzhong; Xue, Gang; Ma, Chunyan

    2017-05-01

    The presence of antibiotic residues in the environment has been regarded as an emerging concern due to their potential adverse environmental consequences such as antibiotic resistance. However, the interaction between antibiotics and extracellular polymeric substances (EPSs) of biofilms in wastewater treatment systems is not entirely clear. In this study, the effect of ciprofloxacin (CIP) antibiotic on biofilm EPS matrix was investigated and characterized using fluorescence excitation-emission matrix (EEM) and parallel factor (PARAFAC) analysis. Physicochemical analysis showed that the proteins were the major EPS fraction, and their contents increased gradually with an increase in CIP concentration (0-300 μg/L). Based on the characterization of biofilm tightly bound EPS (TB-EPS) by EEM, three fluorescent components were identified by PARAFAC analysis. Component C1 was associated with protein-like substances, and components C2 and C3 belonged to humic-like substances. Component C1 exhibited an increasing trend as the CIP addition increased. Pearson's correlation results showed that CIP correlated significantly with the protein contents and component C1, while strong correlations were also found among UV254, dissolved organic carbon, humic acids, and component C3. A combined use of EEM-PARAFAC analysis and chemical measurements was demonstrated as a favorable approach for the characterization of variations in biofilm EPS in the presence of CIP antibiotic.
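
    For readers unfamiliar with PARAFAC, the following is a naive three-way CP decomposition by alternating least squares on a synthetic sample × excitation × emission tensor. It is a bare-bones illustration; real EEM work would use an established implementation with non-negativity constraints and convergence checks.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product; row (j, k) holds B[j] * C[k]."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def parafac(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        # Each factor is updated by least squares against an unfolding of X.
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic EEM tensor with three underlying "fluorophores":
rng = np.random.default_rng(8)
A0, B0, C0 = rng.random((20, 3)), rng.random((30, 3)), rng.random((40, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((20, 30, 40))
A, B, C = parafac(X, rank=3)   # A: sample scores; B, C: spectral loadings
```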

  1. Methodology for cost analysis of film-based and filmless portable chest systems

    NASA Astrophysics Data System (ADS)

    Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.

    1996-05-01

    Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost-analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.

  2. SensA: web-based sensitivity analysis of SBML models.

    PubMed

    Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W

    2014-10-01

    SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license) © The Author 2014. Published by Oxford University Press.

  3. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or they regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task and an automatic artifactual component classifier method (MARA). As a pre-processing step for ICA, high-pass filtering between 1-2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy and the percentage of 'near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to remove EOG artifacts.
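
    A bare-bones version of the pre-processing comparison, with synthetic drifting channels in place of real EEG; the filter order and cutoffs are illustrative, and a dedicated EEG toolbox would normally handle the artifact classification step.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

fs, n_ch, n_samp = 250, 16, 250 * 60
rng = np.random.default_rng(9)
eeg = np.cumsum(rng.standard_normal((n_ch, n_samp)), axis=1)  # drifting channels

def highpass_then_ica(data, cutoff_hz, n_components=16):
    """High-pass filter each channel, then decompose with FastICA."""
    sos = butter(4, cutoff_hz, btype='highpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, data, axis=1)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(filtered.T).T   # (components, samples)
    return sources, ica

# Compare decompositions for cutoffs around the 1-2 Hz range found favourable:
for cutoff in (0.5, 1.0, 2.0):
    sources, _ = highpass_then_ica(eeg, cutoff)
```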

  4. Manufacturing Analysis | Energy Analysis | NREL

    Science.gov Websites

    NREL conducts manufacturing analysis at the national, state, and community levels. Its solar photovoltaic manufacturing cost analysis examines the regional competitiveness of solar photovoltaic manufacturing, points to access to capital as a critical component for scale, and considers how rare-material-based PV technology deployment may influence the United States.

  5. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risk to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
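
    In spirit, the decomposition step works like the following sketch, applied to a small hypothetical component-by-failure-mode incidence matrix rather than the NASA/NTSB data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: components (e.g. bearing, gearbox, rotor, ...); columns: failure modes.
incidence = np.array([
    [5, 0, 2, 1],
    [4, 1, 2, 0],
    [0, 6, 0, 3],
    [1, 5, 1, 2],
    [0, 0, 7, 1],
], dtype=float)

pca = PCA(n_components=2)
coords = pca.fit_transform(incidence)      # components in the reduced space
print(pca.explained_variance_ratio_)       # variance captured per direction
print(coords)                              # nearby rows share failure patterns
```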

  6. [In vitro transdermal delivery of the active fraction of xiangfusiwu decoction based on principal component analysis].

    PubMed

    Li, Zhen-Hao; Liu, Pei; Qian, Da-Wei; Li, Wei; Shang, Er-Xin; Duan, Jin-Ao

    2013-06-01

    The objective of the present study was to establish a method based on principal component analysis (PCA) for the study of transdermal delivery of multiple components in Chinese medicine, and to use this method to choose the best penetration enhancers for the active fraction of Xiangfusiwu decoction (BW). Transdermal delivery experiments on six active components, including ferulic acid, paeoniflorin, albiflorin, protopine, tetrahydropalmatine and tetrahydrocolumbamine, were carried out in improved Franz diffusion cells with isolated rat abdominal skin. The concentrations of these components were determined by LC-MS/MS; the total factor scores of the concentrations at different times were then calculated using PCA and employed in place of the concentrations to compute the cumulative amounts and steady fluxes, the latter of which were considered the indexes for optimizing penetration enhancers. The results showed that, compared to the control group, the steady fluxes of the other groups increased significantly, and 4% azone with 1% propylene glycol showed the best effect. The six components could penetrate the skin well under the action of penetration enhancers. The method established in this study has proved suitable for the study of transdermal delivery of multiple components; it provides a scientific basis for preparation research on Xiangfusiwu decoction and, moreover, can serve as a reference for Chinese medicine research.

  7. Model-free fMRI group analysis using FENICA.

    PubMed

    Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E

    2011-03-01

    Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA that we show are related to the stimulus but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher level GLM analysis and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Using principal component analysis and annual seasonal trend analysis to assess karst rocky desertification in southwestern China.

    PubMed

    Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua

    2017-06-01

    Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address assessment of other environmental issues.
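
    The combination step can be pictured schematically: per-pixel PCA components of annual FVC and a per-pixel trend slope are stacked as features and clustered into four severity classes. Everything below (data, feature choice, cluster labels) is an invented stand-in for the study's pipeline, which additionally uses a dimidiate pixel model and stratification by carbonate rock type.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      n_pixels, n_years = 10000, 10
      fvc = np.random.rand(n_pixels, n_years)        # annual FVC per pixel
      pcs = PCA(n_components=3).fit_transform(fvc)   # three main components
      years = np.arange(n_years)
      slope = np.polyfit(years, fvc.T, 1)[0]         # per-pixel FVC trend

      features = np.column_stack([pcs, slope])
      krd_class = KMeans(n_clusters=4, n_init=10).fit_predict(features)
      print(np.bincount(krd_class))   # pixel counts per severity cluster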

  9. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using a clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between the conventional and modified methods applied to proton nuclear magnetic resonance (¹H NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected reliably. This strategy, named "cluster-aided MCR-ALS," will facilitate more reliable results for metabolomics datasets.

  10. Strategies and Approaches to TPS Design

    NASA Technical Reports Server (NTRS)

    Kolodziej, Paul

    2005-01-01

    Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and compensates for unexpected events, such as meteoroid impact damage, that would detract from nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches already exist for mechanical components that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
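
    For orientation, the classical load-interference (stress-strength) calculation that NDLI builds on is sketched below: treating load and capacity as independent normal random variables, component reliability is the probability that capacity exceeds load. The numbers are illustrative, not TPS design values, and the non-dimensionalization specific to NDLI is not reproduced.

      import numpy as np
      from scipy.stats import norm

      mu_load, sd_load = 100.0, 15.0   # demand, e.g. peak bondline temperature
      mu_cap, sd_cap = 140.0, 10.0     # capacity, e.g. allowable temperature

      # closed form: capacity minus load is normal, reliability = P(C - L > 0)
      z = (mu_cap - mu_load) / np.hypot(sd_cap, sd_load)
      print(f"reliability (analytic):    {norm.cdf(z):.5f}")

      # the same quantity by Monte Carlo, as a sanity check
      rng = np.random.default_rng(0)
      load = rng.normal(mu_load, sd_load, 1_000_000)
      cap = rng.normal(mu_cap, sd_cap, 1_000_000)
      print(f"reliability (Monte Carlo): {(cap > load).mean():.5f}")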

  11. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development, hard failure data are not yet available, and manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.

  12. Space Shuttle critical function audit

    NASA Technical Reports Server (NTRS)

    Sacks, Ivan J.; Dipol, John; Su, Paul

    1990-01-01

    A large fault-tolerance model of the main propulsion system of the US space shuttle has been developed. This model is being used to identify single components and pairs of components that will cause loss of shuttle critical functions. In addition, this model is the basis for risk quantification of the shuttle. The process used to develop and analyze the model is digraph matrix analysis (DMA). The DMA modeling and analysis process is accessed via a graphics-based computer user interface. This interface provides coupled display of the integrated system schematics, the digraph models, the component database, and the results of the fault tolerance and risk analyses.

  13. Cell module and fuel conditioner development

    NASA Technical Reports Server (NTRS)

    Feret, J. M.

    1981-01-01

    A phosphoric acid fuel cell (PAFC) stack design having a 10 kW power rating for operation at higher than atmospheric pressure, based on the existing Mark II design configuration, is described. Functional analysis, trade studies and thermodynamic cycle analysis were performed for requirements definition and for selecting system operating parameters. Fuel cell materials and components were characterized, and performance testing and evaluation of the repeating electrode components were carried out. The state-of-the-art manufacturing technology for all fuel cell components was established, and short stacks of various sizes were fabricated. A 10 kW PAFC stack design for higher-pressure operation was developed utilizing the top-down systems engineering approach.

  14. A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.

    2011-01-01

    A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be directly included in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology. © 2011 Published by Elsevier B.V.
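
    Steps (a)-(c) can be sketched generically: a lognormal fragility curve maps demand to damage probability, demands come from response simulations, and Monte Carlo sampling assigns damage states. The parameters below are invented for illustration and do not come from the paper.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)

      # component demands from (here, simulated) response-history analyses
      demand = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=100_000)

      median_capacity, beta = 1.2, 0.3   # lognormal fragility parameters
      p_damage = norm.cdf(np.log(demand / median_capacity) / beta)

      damaged = rng.random(demand.size) < p_damage   # Monte Carlo damage states
      print(f"estimated damage probability: {damaged.mean():.4f}")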

  15. Alerts Analysis and Visualization in Network-based Intrusion Detection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dr. Li

    2010-08-01

    The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to, due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds, in which birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker's behaviors.

  16. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.

  17. Application of ambient ionization high resolution mass spectrometry to determination of the botanical provenance of the constituents of psychoactive drug mixtures.

    PubMed

    Lesiak, Ashton D; Musah, Rabi A

    2016-09-01

    A continuing challenge in analytical chemistry is species-level determination of the constituents of mixtures that are made of a combination of plant species. There is an added urgency to identify components in botanical mixtures that have mind altering properties, due to the increasing global abuse of combinations of such plants. Here we demonstrate the proof of principle that ambient ionization mass spectrometry, namely direct analysis in real time-high resolution mass spectrometry (DART-HRMS), and statistical analysis tools can be used to rapidly determine the individual components within a psychoactive brew (Ayahuasca) made from a mixture of botanicals. Five plant species used in Ayahuasca preparations were subjected to DART-HRMS analysis. The chemical fingerprint of each was reproducible but unique, thus enabling discrimination between them. The presence of important biomarkers, including N,N-dimethyltryptamine, harmaline and harmine, was confirmed using in-source collision-induced dissociation (CID). Six Ayahuasca brews made from combinations of various plant species were shown to possess a high level of similarity, despite having been made from different constituents. Nevertheless, the application of principal component analysis (PCA) was useful in distinguishing between each of the brews based on the botanical species used in the preparations. From a training set based on 900 individual analyses, three principal components covered 86.38% of the variance, and the leave-one-out cross validation was 98.88%. This is the first report of ambient ionization MS being successfully used for determination of the individual components of plant mixtures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data from the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data from the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies.
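
    The classification stage can be sketched as follows, with an L1-regularized logistic regression standing in for the paper's Linear Programming Machine (both yield sparse linear decision functions). Features and labels are synthetic placeholders; only the shapes (640 components, six features) mirror the description above.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      X = rng.normal(size=(640, 6))      # six features per ICA component
      y = rng.integers(0, 2, 640)        # expert label: artefact (1) or brain (0)

      # the L1 penalty drives uninformative feature weights to zero, mimicking
      # the sparse feature selection of a Linear Programming Machine
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
      print("feature weights:", clf.coef_.round(2))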

  19. Examination of Trends and Evidence-Based Elements in State Physical Education Legislation: A Content Analysis

    ERIC Educational Resources Information Center

    Eyler, Amy A.; Brownson, Ross C.; Aytur, Semra A.; Cradock, Angie L.; Doescher, Mark; Evenson, Kelly R.; Kerr, Jacqueline; Maddock, Jay; Pluto, Delores L.; Steinman, Lesley; Tompkins, Nancy O'Hara; Troped, Philip; Schmid, Thomas L.

    2010-01-01

    Objectives: To develop a comprehensive inventory of state physical education (PE) legislation, examine trends in bill introduction, and compare bill factors. Methods: State PE legislation from January 2001 to July 2007 was identified using a legislative database. Analysis included components of evidence-based school PE from the Community Guide and…

  20. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using the phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is presented, based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), comparing observed values with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
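
    The MSE computation itself can be sketched with a generic textbook implementation of SampEn plus coarse-graining; this is not the authors' code, and the test signal and parameters (m = 2, r = 0.2 of the standard deviation) are illustrative defaults.

      import numpy as np

      def sampen(x, m=2, r=0.2):
          """Sample entropy with Chebyshev distance; r is a fraction of std."""
          r *= np.std(x)
          def count(mm):
              t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
              return (np.sum(d <= r) - len(t)) / 2   # exclude self-matches
          b, a = count(m), count(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def mse(x, scales=range(1, 6)):
          # coarse-grain the signal at each scale, then compute SampEn
          return [sampen(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
                  for s in scales]

      t = np.linspace(0, 20 * np.pi, 1000)
      current = np.sin(t) + 0.2 * np.random.randn(t.size)  # toy phase current
      print(np.round(mse(current), 3))  # entropy vs. scale; faults shift the curve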

  1. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and personal attractiveness estimation values equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  2. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.

  3. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: The case of domain motions

    NASA Astrophysics Data System (ADS)

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-01

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. A MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, the tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by the tICA represented not a closure motion described by a largest-amplitude mode determined by the principal component analysis but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.

  4. Slow dynamics in protein fluctuations revealed by time-structure based independent component analysis: the case of domain motions.

    PubMed

    Naritomi, Yusuke; Fuchigami, Sotaro

    2011-02-14

    Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. A MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, the tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by the tICA represented not a closure motion described by a largest-amplitude mode determined by the principal component analysis but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
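
    The core of tICA is compact enough to sketch: diagonalize the symmetrized time-lagged covariance against the instantaneous covariance, and read the slowest modes off the largest eigenvalues. The trajectory below is a synthetic stand-in for the rigid-body domain trajectory; the lag and dimensions are arbitrary.

      import numpy as np
      from scipy.linalg import eigh

      def tica_modes(X, lag):
          X = X - X.mean(axis=0)                  # X: (n_frames, n_coordinates)
          c0 = X.T @ X / len(X)                   # instantaneous covariance
          ct = X[:-lag].T @ X[lag:] / (len(X) - lag)
          ct = (ct + ct.T) / 2                    # symmetrize the lagged covariance
          evals, evecs = eigh(ct, c0)             # generalized eigenproblem
          order = np.argsort(evals)[::-1]         # slowest modes first
          return evals[order], evecs[:, order]

      traj = np.cumsum(np.random.randn(5000, 6), axis=0) * 0.01  # toy slow motion
      lams, modes = tica_modes(traj, lag=100)
      # implied timescales (in frames) of the three leading modes
      print(np.round(-100 / np.log(np.clip(lams[:3], 1e-12, 0.9999)), 1))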

  5. An integrated software suite for surface-based analyses of cerebral cortex.

    PubMed

    Van Essen, D C; Drury, H A; Dickson, J; Harwell, J; Hanlon, D; Anderson, C H

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database.

  6. An integrated software suite for surface-based analyses of cerebral cortex

    NASA Technical Reports Server (NTRS)

    Van Essen, D. C.; Drury, H. A.; Dickson, J.; Harwell, J.; Hanlon, D.; Anderson, C. H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database.

  7. Determination of the Characteristics and Classification of Near-Infrared Spectra of Patchouli Oil (Pogostemon Cablin Benth.) from Different Origin

    NASA Astrophysics Data System (ADS)

    Diego, M. C. R.; Purwanto, Y. A.; Sutrisno; Budiastra, I. W.

    2018-05-01

    Research related to the non-destructive method of near-infrared (NIR) spectroscopy for aromatic oils is still in development in Indonesia. The objectives of the study were to determine the characteristics of the near-infrared spectra of patchouli oil and to classify it based on its origin. The samples were selected from seven different places in Indonesia (Bogor and Garut from West Java; Aceh and Jambi from Sumatra; and Konawe, Masamba and Kolaka from Sulawesi). The spectral data of patchouli oil were obtained by an FT-NIR spectrometer at wavelengths of 1000-2500 nm, after which the samples were subjected to composition analysis using gas chromatography-mass spectrometry. The transmittance and absorbance spectra were analyzed and principal component analysis (PCA) was then carried out. Discriminant analysis (DA) of the principal components was developed to classify patchouli oil based on its origin. The results show that both the transmittance and absorbance spectra give similar results under PCA for discriminating the seven types of patchouli oil, owing to their distribution and behavior. The DA of the three leading principal components in both processed spectra could classify patchouli oil accurately. These results show that NIR spectroscopy can be used as a reliable method to classify patchouli oil based on its origin.
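
    The chemometric pipeline, PCA compression followed by discriminant analysis of the leading components, can be outlined as below. Spectra and origin labels are random placeholders; only the structure (seven origins, three principal components) follows the description above.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      spectra = rng.normal(size=(70, 1500))   # 70 samples x wavelength channels
      origin = np.repeat(np.arange(7), 10)    # seven origins, ten samples each

      model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
      acc = cross_val_score(model, spectra, origin, cv=5)
      print(f"cross-validated accuracy: {acc.mean():.2f}")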

  8. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.

  9. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis

    PubMed Central

    Chen, Dijun; Neumann, Kerstin; Friedel, Swetlana; Kilian, Benjamin; Chen, Ming; Altmann, Thomas; Klukas, Christian

    2014-01-01

    Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues. PMID:25501589

  10. Rapid differentiation of Chinese hop varieties (Humulus lupulus) using volatile fingerprinting by HS-SPME-GC-MS combined with multivariate statistical analysis.

    PubMed

    Liu, Zechang; Wang, Liping; Liu, Yumei

    2018-01-18

    Hops impart flavor to beer, with the volatile components characterizing the various hop varieties and qualities. Fingerprinting, especially flavor fingerprinting, is often used to identify 'flavor products' because inconsistencies in the description of flavor may lead to an incorrect definition of beer quality. Compared to flavor fingerprinting, volatile fingerprinting is simpler and easier. We performed volatile fingerprinting using head space-solid phase micro-extraction gas chromatography-mass spectrometry combined with similarity analysis and principal component analysis (PCA) for evaluating and distinguishing between three major Chinese hops. Eighty-four volatiles were identified, which were classified into seven categories. Volatile fingerprinting based on similarity analysis did not yield any obvious result. By contrast, hop varieties and qualities were identified using volatile fingerprinting based on PCA. The potential variables explained the variance in the three hop varieties. In addition, the dendrogram and principal component score plot described the differences and classifications of hops. Volatile fingerprinting plus multivariate statistical analysis can rapidly differentiate between the different varieties and qualities of the three major Chinese hops. Furthermore, this method can be used as a reference in other fields. © 2018 Society of Chemical Industry.

  11. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China.

    PubMed

    Zhang, Yuji; Li, Xiaoju; Mao, Lu; Zhang, Mei; Li, Ke; Zheng, Yinxia; Cui, Wangfei; Yin, Hongpo; He, Yanli; Jing, Mingxia

    2018-01-01

    The analysis of factors affecting nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between factors and medication adherence in Xinjiang community-managed patients with hypertension based on principal component analysis. A total of 1,916 community-managed patients with hypertension, selected randomly through multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ²-test and a binary logistic regression model. This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Community management plays an important role in improving patients' medication-taking behavior. Regular medication regimen instruction and better community-level management services have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers.
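
    The analysis chain, principal components computed from correlated survey items feeding a binary logistic regression, can be sketched as follows. The data are simulated placeholders; only the sample size and the nine retained components echo the study.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      X = rng.normal(size=(1916, 30))            # 30 correlated survey items
      adherent = rng.integers(0, 2, size=1916)   # adherence label

      pca = PCA(n_components=9).fit(X)           # nine common factors retained
      print(f"cumulative variance: {pca.explained_variance_ratio_.sum():.1%}")

      # uncorrelated component scores feed the binary logistic regression
      logit = LogisticRegression().fit(pca.transform(X), adherent)
      print("odds ratios:", np.exp(logit.coef_).round(2))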

  12. Gas chromatography/mass spectrometry based component profiling and quality prediction for Japanese sake.

    PubMed

    Mimura, Natsuki; Isogai, Atsuko; Iwashita, Kazuhiro; Bamba, Takeshi; Fukusaki, Eiichiro

    2014-10-01

    Sake is a Japanese traditional alcoholic beverage, which is produced by simultaneous saccharification and alcohol fermentation of polished and steamed rice by Aspergillus oryzae and Saccharomyces cerevisiae. About 300 compounds have been identified in sake, and the contributions of individual components to sake flavor have been examined. However, only a few compounds can explain the characteristics alone, and most of the attributes still remain unclear. The purpose of this study was to examine the relationship between the component profile and the attributes of sake. Gas chromatography coupled with mass spectrometry (GC/MS)-based non-targeted analysis was employed to obtain the low-molecular-weight component profile of Japanese sake, including both nonvolatile and volatile compounds. Sake attributes and overall quality were assessed by an analytical descriptive sensory test, and a model predicting the sensory scores from the component profile was constructed by means of orthogonal projections to latent structures (OPLS) regression analysis. Our results showed that 12 sake attributes [ginjo-ka (aroma of premium ginjo sake), grassy/aldehydic odor, sweet aroma/caramel/burnt odor, sulfury odor, sour taste, umami, bitter taste, body, amakara (dryness), aftertaste, pungent/smoothness and appearance] and overall quality were accurately explained by the component profiles. In addition, we were able to select statistically significant components according to variable importance on projection (VIP). Our methodology clarified the correlation between sake attributes and 200 low-molecular-weight components and established the importance of each component, thus providing new insights for the flavor study of sake. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
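
    The prediction step can be approximated with ordinary PLS regression (scikit-learn provides no OPLS, so PLS stands in here) mapping component profiles to a sensory score; the matrices below are random placeholders for the sake profiles and panel scores.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(5)
      profiles = rng.normal(size=(60, 200))  # 60 sakes x 200 component signals
      quality = rng.normal(size=60)          # panel's overall-quality scores

      pls = PLSRegression(n_components=5)
      pred = cross_val_predict(pls, profiles, quality, cv=10).ravel()
      print(f"cross-validated correlation: {np.corrcoef(quality, pred)[0, 1]:.2f}")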

  13. SUSTAINABILITY. Response to Comment on "Planetary boundaries: Guiding human development on a changing planet".

    PubMed

    Gerten, Dieter; Rockström, Johan; Heinke, Jens; Steffen, Will; Richardson, Katherine; Cornell, Sarah

    2015-06-12

    Jaramillo and Destouni claim that freshwater consumption is beyond the planetary boundary, based on high estimates of water cycle components, different definitions of water consumption, and extrapolation from a single case study. The difference from our analysis, based on mainstream assessments of global water consumption, highlights the need for clearer definitions of water cycle components and improved models and databases. Copyright © 2015, American Association for the Advancement of Science.

  14. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means of analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
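
    Architecture-based reliability of the kind described here is often computed with a Cheung-style discrete-time Markov chain; a minimal version is sketched below with invented numbers: each module has a reliability, control transfers between modules with known probabilities, and system reliability is the probability of reaching the exit module through only successful executions.

      import numpy as np

      R = np.array([0.99, 0.95, 0.97])   # per-module reliabilities
      P = np.array([[0.0, 0.7, 0.3],     # row i: where control goes after module i
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 0.0]])    # module 2 is the exit module

      Q = R[:, None] * P                 # probability of a *successful* transfer
      S = np.linalg.inv(np.eye(3) - Q)   # expected successful visits from module 0
      print(f"system reliability: {S[0, 2] * R[2]:.4f}")  # reach exit, exit succeeds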

  15. New methodologies for multi-scale time-variant reliability analysis of complex lifeline networks

    NASA Astrophysics Data System (ADS)

    Kurtz, Nolan Scot

    The cost of maintaining existing civil infrastructure is enormous. Since the livelihood of the public depends on such infrastructure, its state must be managed appropriately using quantitative approaches. Practitioners must consider not only which components are most fragile to hazards, e.g. seismicity, storm surge, hurricane winds, etc., but also how they participate at the network level, using network analysis. Focusing on particularly damaged components does not necessarily increase network functionality, which is what matters most to the people who depend on such infrastructure. Several network analyses, e.g. S-RDA, LP-bounds, and crude-MCS, and performance metrics, e.g. disconnection bounds and component importance, are available for such purposes. Because these networks already exist, their state over time is also important: if networks are close to chloride sources, deterioration may be a major issue. Information from field inspections may also have large impacts on quantitative models. To address such issues, hazard risk analysis methodologies for deteriorating networks subjected to seismicity, i.e. earthquakes, were created analytically. A bridge component model was constructed for these methodologies. The bridge fragilities, which were constructed from data, required a deeper level of analysis, as these were relevant for specific structures. Furthermore, the network effects of chloride-induced deterioration were investigated. Depending on how mathematical models incorporate new information, many approaches are available, such as Bayesian model updating. To make such procedures more flexible, an adaptive importance sampling scheme was created for structural reliability problems. It handles many kinds of system and component problems with singular or multiple important regions of the limit-state function. These and previously developed analysis methodologies were found to be strongly sensitive to network size. Special network topologies may be more or less computationally difficult, and the resolution of the network also has large effects. To take advantage of some types of topologies, network hierarchical structures with super-link representation have been used in the literature to increase computational efficiency by analyzing smaller, densely connected networks; however, such structures were based on user input and at times subjective. To address this, algorithms must be automated and reliable. These hierarchical structures may also reveal the structure of the network itself. The risk analysis methodology has been expanded to larger networks using such automated hierarchical structures. Component importance is the most important output of such network analysis; however, it may only indicate which bridges to inspect or repair earliest and little else. High correlations influence component importance measures in a negative manner, and a regional approach is not appropriately modelled. To investigate a more regional view, group importance measures based on hierarchical structures were created. Such structures may also be used to create regional inspection and repair approaches. Using these analytical, quantitative risk approaches, the next generation of decision makers may make both component- and region-based optimal decisions using information from both network function and the further effects of infrastructure deterioration.

  16. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications, such as passport identification, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples of lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to make the recognition decision. The FERET database is used for evaluating the proposed method, and the results are encouraging.

  17. Analysing Normal and Partial Glossectomee Tongues Using Ultrasound

    ERIC Educational Resources Information Center

    Bressmann, Tim; Uy, Catherine; Irish, Jonathan C.

    2005-01-01

    The present study aimed at identifying underlying parameters that govern the shape of the tongue. A functional topography of the tongue surface was developed based on three-dimensional ultrasound scans of sustained speech sounds in ten normal subjects. A principal component analysis extracted three components that explained 89.2% of the variance…

  18. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    ERIC Educational Resources Information Center

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  19. A new multicriteria risk mapping approach based on a multiattribute frontier concept

    Treesearch

    Denys Yemshanov; Frank H. Koch; Yakov Ben-Haim; Marla Downing; Frank Sapio; Marty Siltanen

    2013-01-01

    Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that...

  20. Pharmacophore modeling, docking, and principal component analysis based clustering: combined computer-assisted approaches to identify new inhibitors of the human rhinovirus coat protein.

    PubMed

    Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry

    2005-10-06

    The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.

  1. Preliminary analysis of a membrane-based atmosphere-control subsystem

    NASA Technical Reports Server (NTRS)

    Mccray, Scott B.; Newbold, David D.; Ray, Rod; Ogle, Kathryn

    1993-01-01

    Controlled ecological life support systems will require subsystems for maintaining the concentrations of atmospheric gases within acceptable ranges in human habitat chambers and plant growth chambers. The goal of this work was to develop a membrane-based atmosphere-control (MBAC) subsystem that allows the controlled exchange of atmospheric components (e.g., oxygen, carbon dioxide, and water vapor) between these chambers. The MBAC subsystem promises to offer a simple, non-energy-intensive method to separate, store and exchange atmospheric components, producing optimal concentrations of components in each chamber. In this paper, the results of a preliminary analysis of the MBAC subsystem for control of oxygen and nitrogen are presented. Additionally, the MBAC subsystem and its operation are described.

  2. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
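
    For orientation, the classical gradient-descent route to sequential PCA that DOGEDYN improves on can be written in a few lines (Oja's rule, first principal component only). DOGEDYN's dominant-element updates and dynamic initial learning rate are not reproduced here; this is the generic baseline.

      import numpy as np

      rng = np.random.default_rng(6)
      X = rng.normal(size=(5000, 10)) @ rng.normal(size=(10, 10))  # correlated data
      X -= X.mean(axis=0)

      w = rng.normal(size=10)
      w /= np.linalg.norm(w)
      for t, x in enumerate(X):          # one gradient step per sample
          lr = 1.0 / (100 + t)           # decaying learning rate
          y = w @ x
          w += lr * y * (x - y * w)      # Oja's update; keeps ||w|| near 1

      # compare against the batch eigenvector of the sample covariance
      _, vecs = np.linalg.eigh(np.cov(X.T))
      print(f"alignment with batch PC1: {abs(w @ vecs[:, -1]):.3f}")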

  3. Study on Web-Based Tool for Regional Agriculture Industry Structure Optimization Using Ajax

    NASA Astrophysics Data System (ADS)

    Huang, Xiaodong; Zhu, Yeping

    According to the research status of regional agriculture industry structure adjustment information systems and the current development of information technology, this paper takes a web-based regional agriculture industry structure optimization tool as its research target. It introduces Ajax technology and related application frameworks to build an auxiliary toolkit for a decision support system serving agricultural policy makers and economy researchers. The toolkit includes a "one page" style component for regional agriculture industry structure optimization, which provides an agile argument-setting method enabling sensitivity analysis and the use of data and comparative-advantage analysis results, and a component that can solve the linear programming model and its dual problem by the simplex method.
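
    The optimization component can be illustrated with a toy linear program, allocating land between two crops under land and labor constraints, solved with SciPy's LP solver. All coefficients are invented; the real tool's model, dual-problem reporting, and Ajax front end are out of scope here.

      from scipy.optimize import linprog

      # maximize 3*x1 + 5*x2 (net returns) -> minimize the negated objective
      c = [-3.0, -5.0]
      A_ub = [[1.0, 1.0],    # land used by each crop (ha per ha planted)
              [2.0, 4.0]]    # labor required per hectare
      b_ub = [100.0, 320.0]  # 100 ha of land, 320 units of labor available

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
      print(res.x, -res.fun)  # optimal planted areas and total return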

  4. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
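
    Robust PCA itself can be sketched as principal component pursuit: split a data matrix D into a low-rank part L and a sparse part S. Below is a generic inexact augmented-Lagrangian loop, not necessarily the authors' exact solver; the synthetic matrix mimics "training images as columns" with sparse corruptions.

      import numpy as np

      def rpca(D, iters=100):
          m, n = D.shape
          mu = m * n / (4 * np.abs(D).sum())       # common default step parameter
          lam = 1 / np.sqrt(max(m, n))             # sparsity weight from PCP theory
          S = np.zeros_like(D)
          Y = np.zeros_like(D)
          shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
          for _ in range(iters):
              U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
              L = (U * shrink(s, 1 / mu)) @ Vt     # singular-value thresholding
              S = shrink(D - L + Y / mu, lam / mu) # soft-threshold the residual
              Y += mu * (D - L - S)                # dual ascent on the constraint
          return L, S

      rng = np.random.default_rng(7)
      D = rng.normal(size=(64, 5)) @ rng.normal(size=(5, 100))  # rank-5 "images"
      D[rng.random(D.shape) < 0.05] += 10                       # sparse occlusions
      L, S = rpca(D)
      print(np.round(np.linalg.svd(L, compute_uv=False)[:8], 2))  # fast decay ~ rank 5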

  5. Solid Modeling of Crew Exploration Vehicle Structure Concepts for Mass Optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2006-01-01

    Parametric solid and surface models of the crew exploration vehicle (CEV) command module (CM) structure concepts are developed for rapid finite element analyses, structural sizing and estimation of optimal structural mass. The effects of the structural configuration and critical design parameters on the stress distribution are visualized and examined to arrive at an efficient design. The CM structural components consist of the outer heat shield, inner pressurized crew cabin, ring bulkhead and spars. For this study, only the internal cabin pressure load case is considered. Component stress, deflection, margins of safety and mass are used as design goodness criteria. The design scenario is explored by changing the component thickness parameters and materials until an acceptable design is achieved. Aluminum alloy, titanium alloy and advanced composite material properties are considered for the stress analysis, and the results are compared as part of lessons learned and to build up a structural component sizing knowledge base for future CEV technology support. This independent structural analysis and design-scenario-based optimization process may also facilitate better CM structural definition and rapid prototyping.

  6. SSME Post Test Diagnostic System: Systems Section

    NASA Technical Reports Server (NTRS)

    Bickmore, Timothy

    1995-01-01

    An assessment of engine and component health is routinely made after each test firing or flight firing of a Space Shuttle Main Engine (SSME). Currently, this health assessment is done by teams of engineers who manually review sensor data, performance data, and engine and component operating histories. Based on review of information from these various sources, an evaluation is made as to the health of each component of the SSME and the preparedness of the engine for another test or flight. The objective of this project - the SSME Post Test Diagnostic System (PTDS) - is to develop a computer program which automates the analysis of test data from the SSME in order to detect and diagnose anomalies. This report primarily covers work on the Systems Section of the PTDS, which automates the analyses performed by the systems/performance group at the Propulsion Branch of NASA Marshall Space Flight Center (MSFC). This group is responsible for assessing the overall health and performance of the engine, and detecting and diagnosing anomalies which involve multiple components (other groups are responsible for analyzing the behavior of specific components). The PTDS utilizes several advanced software technologies to perform its analyses. Raw test data is analyzed using signal processing routines which detect features in the data, such as spikes, shifts, peaks, and drifts. Component analyses are performed by expert systems, which use 'rules-of-thumb' obtained from interviews with the MSFC data analysts to detect and diagnose anomalies. The systems analysis is performed using case-based reasoning. Results of all analyses are stored in a relational database and displayed via an X-window-based graphical user interface which provides ranked lists of anomalies and observations by engine component, along with supporting data plots for each.

  7. Classification and identification of Rhodobryum roseum Limpr. and its adulterants based on fourier-transform infrared spectroscopy (FTIR) and chemometrics.

    PubMed

    Cao, Zhen; Wang, Zhenjie; Shang, Zhonglin; Zhao, Jiancheng

    2017-01-01

    Fourier-transform infrared spectroscopy (FTIR) with the attenuated total reflectance technique was used to identify Rhodobryum roseum from its four adulterants. The FTIR spectra of six samples in the range from 4000 cm⁻¹ to 600 cm⁻¹ were obtained. The second-derivative transformation was used to identify small and closely spaced absorption peaks. A cluster analysis was performed to classify the spectra in a dendrogram based on spectral similarity. Principal component analysis (PCA) was used to classify the species of the six moss samples. Cluster analysis with PCA was used to identify different genera. However, some species of the same genus exhibited highly similar chemical components and FTIR spectra. Fourier self-deconvolution and the discrete wavelet transform (DWT) were used to enhance the differences among the species with similar chemical components and FTIR spectra. Three scales were selected as the feature-extracting space in the DWT domain. The results show that FTIR spectroscopy with chemometrics is suitable for identifying Rhodobryum roseum and its adulterants.

  8. Early warning reporting categories analysis of recall and complaints data.

    DOT National Transportation Integrated Search

    2001-12-31

    This analysis was performed to assist the National Highway Traffic Safety Administration (NHTSA) in identifying components and systems to be included in early warning reporting (EWR) categories that would be based upon historical safety-related recal...

  9. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.

  10. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to accurately analyse the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved by either the single-run analysis or the multiple-run analysis prior to enzyme treatment. The single-run system of analysis is recommended for the initial investigation of plant phenolics, and the multiple-run approach for analyses where chromatographic resolution requires improvement.

  11. [A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].

    PubMed

    Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei

    2010-04-01

    It is hard to differentiate the same species of wild-growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of Boletus bicolor from five different areas. Based on the fingerprint infrared spectra of the Boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1350-750 cm⁻¹ using the statistical software SPSS 13.0. The accumulated contribution of the first three principal components is 88.87%, capturing almost all of the information in the samples. The two-dimensional projection plot of the first and second principal components shows satisfactory clustering for the classification and discrimination of Boletus bicolor. All Boletus bicolor samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrates that wild-growing Boletus bicolor from different areas can be identified at the species level by FTIR spectra combined with principal component analysis.

  12. Micrometer-scale particle sizing by laser diffraction: critical impact of the imaginary component of refractive index.

    PubMed

    Beekman, Alice; Shan, Daxian; Ali, Alana; Dai, Weiguo; Ward-Smith, Stephen; Goldenberg, Merrill

    2005-04-01

    This study evaluated the effect of the imaginary component of the refractive index on laser diffraction particle size data for pharmaceutical samples. Excipient particles 1-5 μm in diameter (irregular morphology) were measured by laser diffraction. Optical parameters were obtained and verified based on comparison of calculated vs. actual particle volume fraction. Inappropriate imaginary components of the refractive index can lead to inaccurate results, including false peaks in the size distribution. For laser diffraction measurements, obtaining appropriate or "effective" imaginary components of the refractive index was not always straightforward. When the recommended criteria such as the concentration match and the fit of the scattering data gave similar results for very different calculated size distributions, a supplemental technique, microscopy with image analysis, was used to decide between the alternatives. Use of effective optical parameters produced a good match between laser diffraction data and microscopy/image analysis data. The imaginary component of the refractive index can have a major impact on particle size results calculated from laser diffraction data. When performed properly, laser diffraction and microscopy with image analysis can yield comparable results.

  13. Estimation of Psychophysical Thresholds Based on Neural Network Analysis of DPOAE Input/Output Functions

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam; Long, Glenis

    2011-11-01

    The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss causes an increase in the sound level that is just audible to the listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmic sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
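
    As a rough illustration of the final step, the sketch below regresses thresholds on I/O-function slopes with a small neural network; the data and the toy slope-threshold relation are invented, not the study's.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      slopes = rng.uniform(0.2, 1.0, size=(100, 5))   # I/O slopes at 5 sound levels
      thresholds = 40 - 30 * slopes.mean(axis=1) + rng.normal(0, 2, 100)

      ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      ann.fit(slopes[:80], thresholds[:80])           # train on 80 ears
      print(ann.score(slopes[80:], thresholds[80:]))  # R^2 on held-out ears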

  14. On reliable time-frequency characterization and delay estimation of stimulus frequency otoacoustic emissions

    NASA Astrophysics Data System (ADS)

    Biswal, Milan; Mishra, Srikanta

    2018-05-01

    The limited information on the origin and nature of stimulus frequency otoacoustic emissions (SFOAEs) necessitates a thorough reexamination of SFOAE analysis procedures, which will lead to a better understanding of SFOAE generation. The SFOAE response waveform in the time domain can be interpreted as a summation of amplitude-modulated and frequency-modulated component waveforms, and the efficiency of a technique in segregating these components is critical to describing the nature of SFOAEs. Recent advances in robust time-frequency analysis algorithms have claimed more accurate extraction of such components from composite signals buried in noise, but their potential has not been fully explored for SFOAE analysis. Because these techniques treat signal information differently, the choice of method may affect the scientific conclusions. This paper attempts to bridge this gap in the literature by evaluating the performance of three linear time-frequency analysis algorithms, the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the S-transform (ST), and two nonlinear algorithms, the Hilbert-Huang transform (HHT) and the synchrosqueezed wavelet transform (SWT). We revisit the extraction of constituent components and the estimation of their magnitude and delay by carefully evaluating the impact of variations in analysis parameters. The performance of HHT and SWT, from the perspective of time-frequency filtering and delay estimation, was found to be relatively less efficient for analyzing SFOAEs. The intrinsic mode functions of the HHT do not completely characterize the reflection components; hence, IMF-based filtering alone is not recommended for segregating the principal emission from multiple reflection components. We found the STFT, CWT, and ST to be suitable for canceling multiple internal reflection components with only marginal alteration of the SFOAE.
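
    Of the linear methods compared, the STFT is the simplest to reproduce. The sketch below uses a synthetic signal and illustrative parameters only: a sweep plus a delayed, attenuated internal reflection, with a per-frequency delay read off the STFT magnitude ridge.

      import numpy as np
      from scipy.signal import chirp, stft

      fs = 8000
      t = np.arange(0, 1.0, 1 / fs)
      sweep = chirp(t, f0=500, t1=1.0, f1=2000)          # principal emission
      reflection = 0.3 * np.roll(sweep, int(0.01 * fs))  # delayed, weaker copy
      x = sweep + reflection

      f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
      ridge_delay = tau[np.argmax(np.abs(Z), axis=1)]    # time of peak energy per bin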

  15. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model that represents a sample as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis; existing methods, by contrast, factorize the complete dataset simultaneously. The sample model is composed of a reference sample, representing the control and/or case (disease) group, and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated directly from each sample. Due to the locality of the decomposition, the strength of expression of each feature can vary across samples, yet features are still allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, the case- and control-specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers; in contrast to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy. PMID:22208882
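
    The paper's factorization is sparseness-constrained and applied per sample pair; as a loose stand-in, the sketch below uses scikit-learn's NMF (a different but related constrained factorization) on a hypothetical nonnegative matrix of reference and test profiles:

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      X = rng.random((10, 2000))   # rows: reference/test replicates; cols: m/z bins

      model = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
      W = model.fit_transform(X)   # per-sample mixing weights
      H = model.components_        # candidate control-specific, case-specific,
                                   # and neutral components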

  16. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  17. The Researches on Damage Detection Method for Truss Structures

    NASA Astrophysics Data System (ADS)

    Wang, Meng Hong; Cao, Xiao Nan

    2018-06-01

    This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
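
    The damage index itself reduces to the angle between frequency response function (FRF) vectors of the reference and test states, as in this hedged sketch (hypothetical FRF arrays, not the paper's measurements):

      import numpy as np

      def frf_angle(h_ref, h_test):
          # angle (rad) between complex FRF vectors; larger angle = more change
          num = np.abs(np.vdot(h_ref, h_test))
          den = np.linalg.norm(h_ref) * np.linalg.norm(h_test)
          return np.arccos(np.clip(num / den, 0.0, 1.0))

      h_intact = np.random.rand(512) + 1j * np.random.rand(512)
      h_damaged = h_intact + 0.1 * (np.random.rand(512) + 1j * np.random.rand(512))
      print(frf_angle(h_intact, h_damaged))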

  18. Circuit-based versus full-wave modelling of active microwave circuits

    NASA Astrophysics Data System (ADS)

    Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.

    2018-03-01

    Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies between simulations and measurements. Here we design a reconfigurable power amplifier as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding measured parameters and of appropriate modelling of discrete components, and giving specific recipes for good modelling practice.

  19. Global warming potential of pavements

    NASA Astrophysics Data System (ADS)

    Santero, Nicholas J.; Horvath, Arpad

    2009-09-01

    Pavements comprise an essential and vast infrastructure system supporting our transportation network, yet their impact on the environment is largely unquantified. Previous life-cycle assessments have only included a limited number of the applicable life-cycle components in their analysis. This research expands the current view to include eight different components: materials extraction and production, transportation, onsite equipment, traffic delay, carbonation, lighting, albedo, and rolling resistance. Using global warming potential as the environmental indicator, ranges of potential impact for each component are calculated and compared based on the information uncovered in the existing research. The relative impacts between components are found to be orders of magnitude different in some cases. Context-related factors, such as traffic level and location, are also important elements affecting the impacts of a given component. A strategic method for lowering the global warming potential of a pavement is developed based on the concept that environmental performance is improved most effectively by focusing on components with high impact potentials. This system takes advantage of the fact that small changes in high-impact components will have more effect than large changes in low-impact components.

  20. JAMS - a software platform for modular hydrological modelling

    NASA Astrophysics Data System (ADS)

    Kralisch, Sven; Fischer, Christian

    2015-04-01

    Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems demand ever-increasing integration of data and process knowledge in the corresponding simulation models. Software frameworks that allow for the seamless creation of integrated models from less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an Open-Source software framework that has been especially designed to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach for representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models using domain-specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models have been implemented and successfully applied in recent years. We will present the JAMS core concepts and give an overview of models, simulation components and support tools available for the framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.

  1. Deep overbite malocclusion: analysis of the underlying components.

    PubMed

    El-Dawlatly, Mostafa M; Fayed, Mona M Salah; Mostafa, Yehya A

    2012-10-01

    A deepbite malocclusion should not be approached as a disease entity; instead, it should be viewed as a clinical manifestation of underlying discrepancies. The aim of this study was to investigate the various skeletal and dental components of deep bite malocclusion, the significance of the contribution of each, and whether there are certain correlations between them. Dental and skeletal measurements were made on lateral cephalometric radiographs and study models of 124 patients with deepbite. These measurements were statistically analyzed. An exaggerated curve of Spee was the greatest shared dental component (78%), significantly higher than any other component (P = 0.0335). A decreased gonial angle was the greatest shared skeletal component (37.1%), highly significant compared with the other components (P = 0.0019). A strong positive correlation was found between the ramus/Frankfort horizontal angle and the gonial angle; weaker correlations were found between various components. An exaggerated curve of Spee and a decreased gonial angle were the greatest contributing components. This analysis of deepbite components could help clinicians design individualized mechanotherapies based on the underlying cause, rather than being biased toward predetermined mechanics when treating patients with a deepbite malocclusion.

  2. Design Choices for Thermofluid Flow Components and Systems that are Exported as Functional Mockup Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry

    This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis for direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than a large FMU increases computing time by 70%.

  3. Chemometric analysis of multisensor hyperspectral images of precipitated atmospheric particulate matter.

    PubMed

    Ofner, Johannes; Kamilli, Katharina A; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-09-15

    The chemometric analysis of multisensor hyperspectral data allows a comprehensive image-based analysis of precipitated atmospheric particles. Atmospheric particulate matter was precipitated on aluminum foils and analyzed by Raman microspectroscopy and subsequently by electron microscopy and energy dispersive X-ray spectroscopy. All obtained images were of the same spot of an area of 100 × 100 μm². The two hyperspectral data sets and the high-resolution scanning electron microscope images were fused into a combined multisensor hyperspectral data set. This multisensor data cube was analyzed using principal component analysis, hierarchical cluster analysis, k-means clustering, and vertex component analysis. The detailed chemometric analysis of the multisensor data allowed an extensive chemical interpretation of the precipitated particles, and their structure and composition led to a comprehensive understanding of atmospheric particulate matter.
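
    A plausible reading of the fusion step, sketched with placeholder cube sizes (the actual registration and preprocessing are more involved): stack the co-registered spectral channels per pixel, reduce with PCA, then cluster with k-means.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      raman = np.random.rand(100, 100, 300)   # x, y, Raman shift channels
      edx = np.random.rand(100, 100, 50)      # x, y, X-ray energy channels

      fused = np.concatenate([raman, edx], axis=2).reshape(-1, 350)
      scores = PCA(n_components=10).fit_transform(fused)
      labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
      cluster_map = labels.reshape(100, 100)  # per-pixel particle-class image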

  4. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
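
    The flavor of the approach can be sketched as follows; synthetic Gaussian-band spectra and a two-entry library stand in for real pigment data, and the published method adds PCA and reliability metrics on top of this bare ICA-plus-matching step.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      axis = np.linspace(200, 1800, 800)                 # Raman shift, cm^-1
      band = lambda c, w: np.exp(-((axis - c) / w) ** 2)
      library = {'pigment A': band(460, 15) + band(1080, 20),
                 'pigment B': band(950, 10) + band(1350, 25)}
      S = np.array(list(library.values()))

      A = rng.random((5, 2))                             # 5 mixtures of 2 pigments
      X = A @ S + 0.01 * rng.normal(size=(5, 800))

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(X.T).T               # one component per row

      for comp in recovered:                             # match against the library
          best = max(library, key=lambda k: abs(np.corrcoef(comp, library[k])[0, 1]))
          print(best)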

  5. The risk of misclassifying subjects within principal component based asset index

    PubMed Central

    2014-01-01

    The asset index is often used as a measure of socioeconomic status in empirical research as an explanatory variable or to control confounding. Principal component analysis (PCA) is frequently used to create the asset index. We conducted a simulation study to explore how accurately the principal component based asset index reflects the study subjects’ actual poverty level, when the actual poverty level is generated by a simple factor analytic model. In the simulation study using the PC-based asset index, only 1% to 4% of subjects preserved their real position in a quintile scale of assets; between 44% and 82% of subjects were misclassified into the wrong asset quintile. If the PC-based asset index explained less than 30% of the total variance in the component variables, then we consistently observed more than 50% misclassification across quintiles of the index. The frequency of misclassification suggests that the PC-based asset index may not provide a valid measure of poverty level and should be used cautiously as a measure of socioeconomic status. PMID:24987446
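
    The simulation is easy to re-create in outline (illustrative parameters, not the paper's design): generate a latent poverty level from a one-factor model, build the PC1-based index, and compare quintile assignments.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      n, p = 5000, 10
      truth = rng.normal(size=n)                    # latent poverty level
      loadings = rng.uniform(0.3, 0.7, size=p)
      assets = np.outer(truth, loadings) + rng.normal(size=(n, p))

      index = PCA(n_components=1).fit_transform(assets).ravel()
      if np.corrcoef(index, truth)[0, 1] < 0:       # fix the arbitrary PC sign
          index = -index

      def quintile(x):
          return np.searchsorted(np.quantile(x, [0.2, 0.4, 0.6, 0.8]), x)

      misclassified = np.mean(quintile(index) != quintile(truth))
      print(f"misclassified across quintiles: {misclassified:.1%}")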

  6. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  7. Structural damage continuous monitoring by using a data driven approach based on principal component analysis and cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid

    2017-05-01

    Continuous monitoring for damage detection in structural assessment requires low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for use in continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with the acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures to demonstrate the feasibility of the methodology for detecting abrupt changes in the structural response when damage occurs. Two damage cases are studied, a crack and a leak, one for each structure. Experimental results show that the methodology is promising for the continuous monitoring of real structures.

  8. PreSurgMapp: a MATLAB Toolbox for Presurgical Mapping of Eloquent Functional Areas Based on Task-Related and Resting-State Functional MRI.

    PubMed

    Huang, Huiyuan; Ding, Zhongxiang; Mao, Dewang; Yuan, Jianhua; Zhu, Fangmei; Chen, Shuda; Xu, Yan; Lou, Lin; Feng, Xiaoyan; Qi, Le; Qiu, Wusi; Zhang, Han; Zang, Yu-Feng

    2016-10-01

    The main goal of brain tumor surgery is to maximize tumor resection while minimizing the risk of irreversible postoperative functional sequelae. Eloquent functional areas should be delineated preoperatively, particularly for patients with tumors near eloquent areas. Functional magnetic resonance imaging (fMRI) is a noninvasive technique that demonstrates great promise for presurgical planning. However, specialized data processing toolkits for presurgical planning remain lacking. Based on several functions in open-source software such as Statistical Parametric Mapping (SPM), Resting-State fMRI Data Analysis Toolkit (REST), Data Processing Assistant for Resting-State fMRI (DPARSF) and Multiple Independent Component Analysis (MICA), here we introduce an open-source MATLAB toolbox named PreSurgMapp. This toolbox can reveal eloquent areas using comprehensive methods and various complementary fMRI modalities. For example, PreSurgMapp supports both model-based (general linear model, GLM, and seed correlation) and data-driven (independent component analysis, ICA) methods and processes both task-based and resting-state fMRI data. PreSurgMapp is designed for highly automatic and individualized functional mapping with a user-friendly graphical user interface (GUI) for time-saving pipeline processing. For example, sensorimotor and language-related components can be automatically identified, without human intervention, by an effective and accurate component identification algorithm based on a discriminability index. All the results generated can be further evaluated and compared by neuro-radiologists or neurosurgeons. This software has substantial value for clinical neuro-radiology and neuro-oncology, including application to patients with low- and high-grade brain tumors and those with epilepsy foci in the dominant language hemisphere who are planning to undergo a temporal lobectomy.

  9. Residual-Mean Analysis of the Air-Sea Fluxes and Associated Oceanic Meridional Overturning

    DTIC Science & Technology

    2006-12-01

    Fragmentary excerpts: the adiabatic component of the MOC is based entirely on sea surface data; the coordinate system introduced in this study is somewhat [...] heat capacity of water. The technique utilizes observational data based on meteorological re-analysis (density flux at the sea surface). Figure 8 caption: annual mean and temporal standard deviation of the zonally-averaged mixed-layer depth; the plotted data are based on the Levitus 94 climatology.

  10. Grasp-Based Functional Coupling Between Reach- and Grasp-Related Components of Forelimb Muscle Activity

    PubMed Central

    Geed, Shashwati; van Kan, Peter L. E.

    2017-01-01

    How are appropriate combinations of forelimb muscles selected during reach-to-grasp movements in the presence of neuromotor redundancy and important task-related constraints? The authors tested whether grasp type or target location preferentially influence the selection and synergistic coupling between forelimb muscles during reach-to-grasp movements. Factor analysis applied to 14–20 forelimb electromyograms recorded from monkeys performing reach-to-grasp tasks revealed 4–6 muscle components that showed transport/preshape- or grasp-related features. Weighting coefficients of transport/preshape-related components demonstrated strongest similarities for reaches that shared the same grasp type rather than the same target location. Scaling coefficients of transport/preshape- and grasp-related components showed invariant temporal coupling. Thus, grasp type influenced strongly both transport/preshape- and grasp-related muscle components, giving rise to grasp-based functional coupling between forelimb muscles. PMID:27589010

  11. A gravitational lens candidate discovered with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Maoz, Dan; Bahcall, John N.; Schneider, Donald P.; Doxsey, Rodger; Bahcall, Neta A.; Filippenko, Alexei V.; Goss, W. M.; Lahav, Ofer; Yanny, Brian

    1992-01-01

    Evidence is reported for gravitational lensing of the high-redshift (z = 3.8) quasar 1208 + 101, observed as part of the Snapshot survey with the HST Planetary Camera. An HST V image taken on gyroscopes resolves the quasar into three point-source components, with the two fainter images having separations of 0.1 and 0.5 arcsec from the central bright component. A radio observation of the quasar with the VLA at 2 cm shows that, like most quasars of this redshift, 1208 + 101 is radio quiet. Based on positional information alone, the probability that the observed optical components are chance superpositions of Galactic stars is small, but not negligible. Analysis of a combined ground-based spectrum of all three components, using the relative brightnesses of the HST image, supports the lensing hypothesis. If all the components are lensed images of the quasar, the observed configuration cannot be reproduced by simple lens models.

  12. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistical significant improvements of the classification accuracies in both two experiments, namely, motor imagery and emotion recognition.
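
    A simplified sketch of the wavelet-ICA idea (synthetic signals and an invented blink template; the published pipeline is more elaborate): decompose each channel with a DWT, run ICA across channels in the coefficient domain, zero the component most correlated with the a priori artifact template, and reconstruct.

      import numpy as np
      import pywt
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(4)
      eeg = rng.normal(size=(8, 1024))                  # 8 channels
      blink = np.zeros(1024); blink[300:340] = 5.0      # a priori artifact template
      eeg += np.outer(rng.uniform(0.5, 1.0, 8), blink)  # artifact leaks into channels

      coeffs = [pywt.wavedec(ch, 'db4', level=4) for ch in eeg]
      lengths = [len(c) for c in coeffs[0]]
      flat = np.array([np.concatenate(c) for c in coeffs])

      ica = FastICA(n_components=8, random_state=0)
      sources = ica.fit_transform(flat.T).T             # ICs in the wavelet domain

      tmpl = np.concatenate(pywt.wavedec(blink, 'db4', level=4))
      art = np.argmax([abs(np.corrcoef(s, tmpl)[0, 1]) for s in sources])
      sources[art] = 0                                  # drop the artifact component
      clean_flat = ica.inverse_transform(sources.T).T   # back to wavelet coefficients

      def unflatten(v):                                 # split back per DWT level
          parts, i = [], 0
          for n in lengths:
              parts.append(v[i:i + n]); i += n
          return parts

      clean = np.array([pywt.waverec(unflatten(v), 'db4')[:1024] for v in clean_flat])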

  14. An ICT Adoption Framework for Education: A Case Study in Public Secondary School of Indonesia

    NASA Astrophysics Data System (ADS)

    Nurjanah, S.; Santoso, H. B.; Hasibuan, Z. A.

    2017-01-01

    This paper presents preliminary research findings on an ICT adoption framework for education. Although many studies of ICT adoption frameworks in education have been conducted in various countries, they lack analysis of the degree to which each component contributes to the success of the framework. In this paper, a set of components linked to ICT adoption in education is identified based on the literature and explorative analysis. The components are Infrastructure, Application, User Skills, Utilization, Finance, and Policy. The components are used as a basis to develop a questionnaire to capture the current state of ICT adoption in schools. The questionnaire data are processed using structural equation modeling (SEM). The results show that each component contributes differently to the ICT adoption framework: Finance has the strongest effect on Infrastructure readiness, whilst User Skills has the strongest effect on Utilization. The study concludes that the development of an ICT adoption framework should consider the contribution weights among the components, which can be used to guide the implementation of ICT adoption in education.

  15. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.

  16. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.

  17. Spatially resolved spectroscopy analysis of the XMM-Newton large program on SN1006

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Tao; Decourchelle, Anne; Miceli, Marco; Vink, Jacco; Bocchino, Fabrizio

    2016-04-01

    We perform analysis of the XMM-Newton large program on SN1006 based on our newly developed methods of spatially resolved spectroscopy analysis. We extract spectra from both high and low resolution meshes. The high resolution meshes (3596) are used to roughly decompose the thermal and non-thermal components and to characterize the spatial distributions of different parameters, such as the temperature, abundances of different elements, ionization age, and electron density of the thermal component, as well as the photon index and cutoff frequency of the non-thermal component. The low resolution meshes (583), on the other hand, focus on the interior region dominated by the thermal emission and have enough counts to characterize the Si lines well. We fit the spectra from the low resolution meshes with different models, in order to decompose the multiple plasma components at different thermal and ionization states and compare their spatial distributions. In this poster, we will present the initial results of this project.

  18. Molecular characterization of banana bunchy top virus isolate from Sri Lanka and its genetic relationship with other isolates.

    PubMed

    Wickramaarachchi, W A R T; Shankarappa, K S; Rangaswamy, K T; Maruthi, M N; Rajapakse, R G A S; Ghosh, Saptarshi

    2016-06-01

    Bunchy top disease of banana, caused by Banana bunchy top virus (BBTV, genus Babuvirus, family Nanoviridae), is one of the most important constraints on banana production in different parts of the world. Six genomic DNA components of a BBTV isolate from Kandy, Sri Lanka (BBTV-K) were amplified by polymerase chain reaction (PCR) with specific primers, using total DNA extracted from banana tissues showing typical symptoms of bunchy top disease. The amplicons were of the expected size of 1.0-1.1 kb and were cloned and sequenced. Analysis of the sequence data revealed the presence of six DNA components (DNA-R, DNA-U3, DNA-S, DNA-N, DNA-M and DNA-C) for the Sri Lanka isolate. Comparison of the sequence data of the DNA components, followed by phylogenetic analysis, grouped the Sri Lanka (Kandy) isolate in the Pacific Indian Oceans (PIO) group. The Sri Lanka (Kandy) isolate of BBTV is classified as a new member of the PIO group based on analysis of the six components of the virus.

  19. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.

  20. Principal component analysis of indocyanine green fluorescence dynamics for diagnosis of vascular diseases

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee

    2015-03-01

    Indocyanine green (ICG), a near-infrared fluorophore, has been used in visualization of vascular structure and non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine ICG spatiotemporal profiles and to obtain pathophysiological information from ICG dynamics. We demonstrate that the principal components of ICG dynamics in both feet show significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics reveals better classification performance than fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.

  1. Characterizing natural colloidal/particulate-protein interactions using fluorescence-based techniques and principal component analysis.

    PubMed

    Peiris, Ramila H; Ignagni, Nicholas; Budman, Hector; Moresoli, Christine; Legge, Raymond L

    2012-09-15

    Characterization of the interactions between natural colloidal/particulate- and protein-like matter is important for understanding their contribution to different physiochemical phenomena like membrane fouling, adsorption of bacteria onto surfaces and various applications of nanoparticles in nanomedicine and nanotoxicology. Precise interpretation of the extent of such interactions is, however, hindered by the limited ability of most characterization methods to provide rapid, sensitive and accurate measurements. Here we report on a fluorescence-based excitation-emission matrix (EEM) approach in combination with principal component analysis (PCA) to extract information related to the interaction between natural colloidal/particulate- and protein-like matter. Surface plasmon resonance (SPR) analysis and fiber-optic probe based surface fluorescence measurements were used to confirm that the proposed approach can characterize colloidal/particulate-protein interactions at the physical level. This method has potential to be a fundamental measurement of these interactions, with the advantage that it can be performed rapidly and with high sensitivity.

  2. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into RGB three-channel components. First, the data obtained from the color video are pre-processed, including normalization and sphering. Spectrum analysis of the components recovered by independent component analysis (ICA), using the JADE algorithm, is then used to estimate the cardiac pulse rate. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to commercial pulse oximetry sensors and achieved high accuracy and correlation. The root mean square error of the estimated results is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of the cardiac pulse rate.
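
    A minimal sketch of this pipeline, with synthetic RGB traces and scikit-learn's FastICA standing in for the JADE algorithm used in the paper:

      import numpy as np
      from sklearn.decomposition import FastICA

      fs = 30.0                              # webcam frame rate, Hz
      t = np.arange(0, 30, 1 / fs)
      pulse = np.sin(2 * np.pi * 1.2 * t)    # 72 bpm cardiac component
      rng = np.random.default_rng(5)
      rgb = np.outer(rng.uniform(0.1, 0.5, 3), pulse) + rng.normal(0, 0.3, (3, t.size))

      # normalization (whitening/"sphering" is handled internally by FastICA)
      X = (rgb - rgb.mean(axis=1, keepdims=True)) / rgb.std(axis=1, keepdims=True)
      sources = FastICA(n_components=3, random_state=0).fit_transform(X.T).T

      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      band = (freqs > 0.75) & (freqs < 4.0)  # plausible pulse band, 45-240 bpm
      powers = np.abs(np.fft.rfft(sources, axis=1)) ** 2
      comp = sources[np.argmax(powers[:, band].max(axis=1))]
      spectrum = np.abs(np.fft.rfft(comp)) ** 2
      bpm = 60 * freqs[band][np.argmax(spectrum[band])]
      print(f"estimated pulse rate: {bpm:.1f} bpm")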

  3. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 × 10⁶ per MAC is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that the predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than the flight test data.

  4. Visual target modulation of functional connectivity networks revealed by self-organizing group ICA.

    PubMed

    van de Ven, Vincent; Bledowski, Christoph; Prvulovic, David; Goebel, Rainer; Formisano, Elia; Di Salle, Francesco; Linden, David E J; Esposito, Fabrizio

    2008-12-01

    We applied a data-driven analysis based on self-organizing group independent component analysis (sogICA) to fMRI data from a three-stimulus visual oddball task. SogICA is particularly suited to the investigation of the underlying functional connectivity and does not rely on a predefined model of the experiment, which overcomes some of the limitations of hypothesis-driven analysis. Unlike most previous applications of ICA in functional imaging, our approach allows the analysis of the data at the group level, which is of particular interest in high order cognitive studies. SogICA is based on the hierarchical clustering of spatially similar independent components, derived from single subject decompositions. We identified four main clusters of components, centered on the posterior cingulate, bilateral insula, bilateral prefrontal cortex, and right posterior parietal and prefrontal cortex, consistently across all participants. Post hoc comparison of time courses revealed that insula, prefrontal cortex and right fronto-parietal components showed higher activity for targets than for distractors. Activation for distractors was higher in the posterior cingulate cortex, where deactivation was observed for targets. While our results conform to previous neuroimaging studies, they also complement conventional results by showing functional connectivity networks with unique contributions to the task that were consistent across subjects. SogICA can thus be used to probe functional networks of active cognitive tasks at the group-level and can provide additional insights to generate new hypotheses for further study.

  5. An Integrated Software Suite for Surface-based Analyses of Cerebral Cortex

    PubMed Central

    Van Essen, David C.; Drury, Heather A.; Dickson, James; Harwell, John; Hanlon, Donna; Anderson, Charles H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database. PMID:11522765

  6. Influence of running stride frequency in heart rate variability analysis during treadmill exercise testing.

    PubMed

    Bailón, Raquel; Garatachea, Nuria; de la Iglesia, Ignacio; Casajús, Jose Antonio; Laguna, Pablo

    2013-07-01

    The analysis and interpretation of heart rate variability (HRV) during exercise is challenging not only because of the nonstationary nature of exercise, the time-varying mean heart rate, and the fact that the respiratory frequency exceeds 0.4 Hz, but also because of other factors, such as the component centered at the pedaling frequency observed in maximal cycling tests, which may confound the interpretation of HRV analysis. The objectives of this study are to test the hypothesis that a component centered at the running stride frequency (SF) appears in the HRV of subjects during maximal treadmill exercise testing, and to study its influence on the interpretation of the low-frequency (LF) and high-frequency (HF) components of HRV during exercise. The HRV of 23 subjects during maximal treadmill exercise testing is analyzed. The instantaneous power of the different HRV components is computed from the smoothed pseudo-Wigner-Ville distribution of the modulating signal assumed to carry information from the autonomic nervous system, which is estimated based on the time-varying integral pulse frequency modulation model. Besides the LF and HF components, a component centered at the running SF, as well as its aliases, is revealed. The power associated with the SF component and its aliases represents 22±7% (median ± median absolute deviation) of the total HRV power in all subjects. Normalized LF power decreases as exercise intensity increases, while normalized HF power increases. The power associated with the SF does not change significantly with exercise intensity. Consideration of the running SF component and its aliases is very important in HRV analysis, since stride frequency aliases may overlap with the LF and HF components.
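
    The aliasing effect is easy to illustrate numerically: HRV is effectively sampled once per heartbeat, so a stride-frequency component above half the mean heart rate folds back into the analysis band. The numbers below are illustrative, not the study's.

      f_hr = 2.5   # mean heart rate, Hz (150 beats/min)
      f_sf = 2.8   # running stride frequency, Hz

      # alias: distance to the nearest integer multiple of the heart rate
      f_alias = min(abs(f_sf - k * f_hr) for k in range(5))
      print(f_alias)  # 0.3 Hz: inside the conventional HF band (0.15-0.4 Hz)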

  7. Dynamics of Rotating Multi-component Turbomachinery Systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles

    1993-01-01

    The ultimate objective of turbomachinery vibration analysis is to predict both the overall, as well as component dynamic response. To accomplish this objective requires complete engine structural models, including multistages of bladed disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach each component is analyzed as a separate structure and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that even though readily available detailed component models are utilized, accurate and comprehensive system response information may be obtained. Sample problems, which include a fixed base rotating blade and a blade on a flexible rotor, are presented.

  8. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
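
    For orientation, the classical-LDA baseline that the paper generalizes can be obtained in a few lines (synthetic data; the probabilistic generalization itself is not reproduced here):

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                                 n_classes=3, random_state=0)
      lda = LinearDiscriminantAnalysis(n_components=2)
      Z = lda.fit_transform(X, y)   # class-informative components for data
                                    # exploration, visualization, and
                                    # dimensionality reduction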

  9. Carrier Based Air Logistics Study: Maintenance Analysis.

    DTIC Science & Technology

    1982-01-01

    … Management System; AECL, Avionics Equipment Configuration List; AIMD, Aircraft Intermediate Maintenance Department; ASO, Aviation Supply Office; ASW, … Component-specific data, and indentured relationships between components, were extracted from the Aviation Supply Office (ASO) weapon …

  10. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique, which decomposes each chromophore into multi-resolution images that can be used to identify different sizes of clusters or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The results show that the method has a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness of the skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequencies; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.

  11. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    PubMed

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides a higher clustering sensitivity. In addition, it is more sensitive to relatively weak functional connection regions. The study thus concludes that, in the process of receiving emotional stimuli, the most prominent activation areas are mainly distributed in the frontal lobe, the cingulum, and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable approach for follow-up studies of brain functional connectivity.

  12. Management status of end-of-life vehicles and development strategies of used automotive electronic control components recycling industry in China.

    PubMed

    Wang, Junjun; Chen, Ming

    2012-11-01

    Recycling companies play a leading role in the system for end-of-life vehicles (ELVs) in China. Automotive manufacturers in China are rarely involved in recycling ELVs, and they seldom provide dismantling information to recycling companies. In addition, no professional shredding plant is available. The used automotive electronic control components recycling industry in China has yet to take shape because of the lack of supporting technology and profitable business models. Given the rapid growth of the vehicle population and of electronic control units in automobiles in China, the used automotive electronic control components recycling industry requires immediate development. This paper analyses the current recycling system for ELVs in China and introduces the automotive product recycling technology roadmap as well as the recycling industry development goals. The strengths, weaknesses, opportunities and threats of the current used automotive electronic control components recycling industry in China are analysed comprehensively based on the 'strengths, weaknesses, opportunities and threats' (SWOT) method. The results of the analysis indicate that this recycling industry responds well to all the factors and has good opportunities for development. Based on the analysis, new development strategies for the used automotive electronic control components recycling industry, in accordance with the actual conditions of China, are presented.

  13. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, while the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns in the PC2 maps of the tumor-bearing mice were in good agreement with the actual tumor locations. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from the nearby fluorescence background of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
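
    A minimal sketch of how such PC maps can be computed, assuming a dynamic stack in which each pixel's time course is one observation; the frame count, image size, and random data are placeholders, not the paper's dataset.

```python
# Minimal sketch: PCA over a dynamic image stack; PC score maps are spatial.
import numpy as np
from sklearn.decomposition import PCA

T, H, W = 50, 64, 64
stack = np.random.rand(T, H, W)                # stand-in for dynamic frames
X = stack.reshape(T, -1).T                     # (pixels, time points)

pca = PCA(n_components=3)
scores = pca.fit_transform(X)                  # (pixels, 3)
pc1_map = scores[:, 0].reshape(H, W)           # pharmacokinetic (liver-like)
pc2_map = scores[:, 1].reshape(H, W)           # tumor-related features (per paper)
```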

  14. Metabolite profiling of soy sauce using gas chromatography with time-of-flight mass spectrometry and analysis of correlation with quantitative descriptive analysis.

    PubMed

    Yamamoto, Shinya; Bamba, Takeshi; Sano, Atsushi; Kodama, Yukako; Imamura, Miho; Obata, Akio; Fukusaki, Eiichiro

    2012-08-01

    Soy sauces, produced from different ingredients and by different brewing processes, vary in their components and quality. It is therefore extremely important to understand the relationship between the components and the sensory attributes of soy sauces. The current study sought to perform metabolite profiling in order to devise a method of assessing the attributes of soy sauces. Quantitative descriptive analysis (QDA) data for 24 soy sauce samples were obtained from well-selected sensory panelists. Metabolite profiles, primarily concerning low-molecular-weight hydrophilic components, were obtained by gas chromatography with time-of-flight mass spectrometry (GC/TOFMS). QDA data for soy sauces were accurately predicted by projection to latent structures (PLS), with the metabolite profiles serving as explanatory variables and the QDA data set serving as the response variable. Moreover, analysis of the correlation between the matrices of metabolite profiles and QDA data indicated contributing compounds that were highly correlated with the QDA data. In particular, sugars were indicated to be important components of the taste of soy sauces. This new approach, which combines metabolite profiling with QDA, is applicable to the analysis of the sensory attributes of food that result from complex interactions between its components, and is effective for finding important compounds that contribute to those attributes. Copyright © 2012 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
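
    A hedged sketch of the PLS step, assuming scikit-learn's PLSRegression and invented data shapes (24 samples, 120 metabolite features, 5 sensory attributes); it is not the authors' pipeline.

```python
# Sketch: PLS regression predicting sensory (QDA) scores from metabolite profiles.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

X = np.random.rand(24, 120)    # 24 soy sauces x 120 metabolite features
Y = np.random.rand(24, 5)      # 24 soy sauces x 5 sensory attributes

pls = PLSRegression(n_components=3)
Y_hat = cross_val_predict(pls, X, Y, cv=6)    # cross-validated predictions
pls.fit(X, Y)
# Loadings with large magnitude point to compounds correlated with attributes.
print(abs(pls.x_loadings_[:, 0]).argsort()[-5:])
```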

  15. Optimizing of MALDI-ToF-based low-molecular-weight serum proteome pattern analysis in detection of breast cancer patients; the effect of albumin removal on classification performance.

    PubMed

    Pietrowska, M; Marczak, L; Polanska, J; Nowicka, E; Behrent, K; Tarnawski, R; Stobiecki, M; Polanski, A; Widlak, P

    2010-01-01

    Mass spectrometry-based analysis of the serum proteome allows the identification of multi-peptide patterns/signatures specific for the blood of cancer patients, and thus has high potential value for cancer diagnostics. However, because of problems with the optimization and standardization of experimental and computational design, none of the identified proteome patterns/signatures has yet been approved for diagnostics in clinical practice. Here we compared two methods of serum sample preparation for mass spectrometry-based proteome pattern analysis aimed at identifying biomarkers that could be used in the early detection of breast cancer. Blood samples were collected in a group of 92 patients diagnosed at early stages (I and II) of the disease before the start of therapy, and in a group of age-matched healthy controls (104 women). Serum specimens were purified and analyzed using MALDI-ToF spectrometry, either directly or after membrane filtration (50 kDa cut-off) to remove albumin and other large serum proteins. Mass spectra of the low-molecular-weight fraction (2-10 kDa) of the serum proteome were resolved using Gaussian mixture decomposition, and the identified spectral components were used to build classifiers that differentiated samples from breast cancer patients and healthy persons. Mass spectra of complete serum and of membrane-filtered, albumin-depleted samples have markedly different structures, and peaks specific to each type of sample could be identified. The optimal classifier built for the complete serum specimens consisted of 8 spectral components and had 81% specificity and 72% sensitivity, while that built for the membrane-filtered samples consisted of 4 components and had 80% specificity and 81% sensitivity. We conclude that pre-processing of samples to remove albumin might be recommended before MALDI-ToF mass spectrometric analysis of the low-molecular-weight components of human serum. Keywords: albumin removal; breast cancer; clinical proteomics; mass spectrometry; pattern analysis; serum proteome.
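
    A sketch of Gaussian mixture decomposition of a one-dimensional spectrum, under the assumption that intensity-weighted fitting can be approximated by resampling m/z values in proportion to intensity (scikit-learn's GaussianMixture takes no sample weights); the two-peak spectrum is synthetic.

```python
# Illustrative sketch: decomposing a 1-D mass spectrum into Gaussian components.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
mz = np.linspace(2000, 10000, 4000)                  # 2-10 kDa window
intensity = (np.exp(-(mz - 4000)**2 / 2e4)
             + 0.5 * np.exp(-(mz - 6500)**2 / 8e4))  # toy two-peak spectrum

# Resample m/z values proportional to intensity, then fit the mixture.
draws = rng.choice(mz, size=20000, p=intensity / intensity.sum())
gmm = GaussianMixture(n_components=2, random_state=0).fit(draws[:, None])
print(np.sort(gmm.means_.ravel()))   # component locations near 4000 and 6500
```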

  16. 10 CFR 50.48 - Fire protection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... suppression systems; and (iii) The means to limit fire damage to structures, systems, or components important...) Standard 805, “Performance-Based Standard for Fire Protection for Light Water Reactor Electric Generating... pressurized-water reactors (PWRs) is not permitted. (iv) Uncertainty analysis. An uncertainty analysis...

  17. 10 CFR 50.48 - Fire protection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... suppression systems; and (iii) The means to limit fire damage to structures, systems, or components important...) Standard 805, “Performance-Based Standard for Fire Protection for Light Water Reactor Electric Generating... pressurized-water reactors (PWRs) is not permitted. (iv) Uncertainty analysis. An uncertainty analysis...

  18. Improving Cluster Analysis with Automatic Variable Selection Based on Trees

    DTIC Science & Technology

    2014-12-01

    …regression trees; Daisy, DISsimilAritY; PAM, partitioning around medoids; PMA, penalized multivariate analysis; SPC, sparse principal components; UPGMA, unweighted pair-group average method. … This method measures dissimilarities between all objects in two clusters and takes the average value

  19. Involvement of the anterior cingulate cortex in time-based prospective memory task monitoring: An EEG analysis of brain sources using Independent Component and Measure Projection Analysis

    PubMed Central

    Burgos, Pablo; Kilborn, Kerry; Evans, Jonathan J.

    2017-01-01

    Objective Time-based prospective memory (PM), remembering to do something at a particular moment in the future, is considered to depend upon self-initiated strategic monitoring, involving a retrieval mode (sustained maintenance of the intention) plus target checking (intermittent time checks). The present experiment was designed to explore what brain regions and brain activity are associated with these components of strategic monitoring in time-based PM tasks. Method 24 participants were asked to reset a clock every four minutes, while performing a foreground ongoing word categorisation task. EEG activity was recorded and data were decomposed into source-resolved activity using Independent Component Analysis. Common brain regions across participants, associated with retrieval mode and target checking, were found using Measure Projection Analysis. Results Participants decreased their performance on the ongoing task when concurrently performed with the time-based PM task, reflecting an active retrieval mode that relied on withdrawal of limited resources from the ongoing task. Brain activity, with its source in or near the anterior cingulate cortex (ACC), showed changes associated with an active retrieval mode including greater negative ERP deflections, decreased theta synchronization, and increased alpha suppression for events locked to the ongoing task while maintaining a time-based intention. Activity in the ACC was also associated with time-checks and found consistently across participants; however, we did not find an association with time perception processing per se. Conclusion The involvement of the ACC in both aspects of time-based PM monitoring may be related to different functions that have been attributed to it: strategic control of attention during the retrieval mode (distributing attentional resources between the ongoing task and the time-based task) and anticipatory/decision making processing associated with clock-checks. PMID:28863146

  20. Process management using component thermal-hydraulic function classes

    DOEpatents

    Morman, James A.; Wei, Thomas Y. C.; Reifman, Jaques

    1999-01-01

    A process management expert system that, following the malfunction of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component at on-line speeds, to maintain operation of the process at full or partial capacity or to provide safe shutdown of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component, in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum and energy transfer, and are used to determine possible realignments of component configurations in response to the thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces the accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced.

  1. Process management using component thermal-hydraulic function classes

    DOEpatents

    Morman, J.A.; Wei, T.Y.C.; Reifman, J.

    1999-07-27

    A process management expert system that, following the malfunction of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component at on-line speeds, to maintain operation of the process at full or partial capacity or to provide safe shutdown of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component, in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum and energy transfer, and are used to determine possible realignments of component configurations in response to the thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces the accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced. 5 figs.

  2. Evaluation of methodologies for assessing the overall diet: dietary quality scores and dietary pattern analysis.

    PubMed

    Ocké, Marga C

    2013-05-01

    This paper aims to describe different approaches for studying the overall diet, with their advantages and limitations. Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex, with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to studying the overall diet can be distinguished. The first method is researcher-defined scores or indices of diet quality. These are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data. In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created of people who cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still in an exploratory phase, but seems to have great potential and complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues.

  3. Separation of pedogenic and lithogenic components of magnetic susceptibility in the Chinese loess/palaeosol sequence as determined by the CBD procedure and a mixing analysis

    NASA Astrophysics Data System (ADS)

    Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.

    2000-08-01

    Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
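
    The mixing analysis can be illustrated with a two-end-member linear model; the end-member susceptibility values below are invented for illustration and are not the paper's estimates.

```python
# Minimal sketch of a two-end-member mixing analysis: total susceptibility is
# modeled as a linear mix of a lithogenic background and a pedogenic end member.
import numpy as np

chi_total = np.array([95.0, 180.0, 260.0, 120.0])  # bulk susceptibility samples
chi_litho = 60.0     # assumed lithogenic (unweathered loess) end member
chi_pedo_em = 300.0  # assumed fully pedogenic end member

# Fraction of the pedogenic end member in each sample, and its contribution.
f_pedo = (chi_total - chi_litho) / (chi_pedo_em - chi_litho)
chi_pedogenic = chi_total - chi_litho   # pedogenic component above background
print(np.round(f_pedo, 2), chi_pedogenic)
```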

  4. Components of Effective Cognitive-Behavioral Therapy for Pediatric Headache: A Mixed Methods Approach

    PubMed Central

    Law, Emily F.; Beals-Erickson, Sarah E.; Fisher, Emma; Lang, Emily A.; Palermo, Tonya M.

    2017-01-01

    Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache. PMID:29503787

  5. Components of Effective Cognitive-Behavioral Therapy for Pediatric Headache: A Mixed Methods Approach.

    PubMed

    Law, Emily F; Beals-Erickson, Sarah E; Fisher, Emma; Lang, Emily A; Palermo, Tonya M

    2017-01-01

    Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache.

  6. Different brain activations between own- and other-race face categorization: an fMRI study using group independent component analysis

    NASA Astrophysics Data System (ADS)

    Wei, Wenjuan; Liu, Jiangang; Dai, Ruwei; Feng, Lu; Li, Ling; Tian, Jie

    2014-03-01

    Previous behavioral research has shown that individuals process own- and other-race faces differently. One well-known effect is the other-race effect (ORE), which indicates that individuals categorize other-race faces more accurately and faster than own-race faces. Existing functional magnetic resonance imaging (fMRI) studies of the other-race effect have mainly focused on racial prejudice and the socio-affective differences towards own- and other-race faces. In the present fMRI study, we adopted a race-categorization task to determine the differences in activation level between categorizing own- and other-race faces. Thirty-one Chinese participants, who live in China with Chinese as the majority and who had no direct contact with Caucasian individuals, were recruited. We used group independent component analysis (ICA), a method of blind source signal separation that has proven promising for the analysis of fMRI data. We separated the data into 56 components, a number estimated from one subject using the Minimum Description Length (MDL) criterion. The components were sorted based on the multiple-linear-regression temporal sorting criterion, and the fitted regression parameters were used in statistical tests to evaluate the task-relatedness of the components. A one-way ANOVA was performed to test the significance of the component time courses in the different conditions. Our results showed that areas whose coordinates are similar to the right FFA coordinates reported in previous studies were more strongly activated for own-race faces than for other-race faces, while the precuneus showed greater activation for other-race faces than for own-race faces.

  7. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

    We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, ellipsoidal formula, and Linskey's formula significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow vestibular schwannoma volume to be estimated more accurately than all one-component formulas.
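
    The one-component estimators named above follow standard formulas; the two-component "ice cream cone" is approximated below as a cone plus a hemisphere, which is an assumption about the formula's form rather than the authors' exact equation.

```python
# Sketch of one-component volume estimators plus an assumed two-component model.
import math

a, b, c = 2.1, 1.8, 1.5           # orthogonal tumor diameters in cm (invented)

v_cuboidal    = a * b * c                          # overestimates (per abstract)
v_ellipsoidal = math.pi / 6 * a * b * c
v_spherical   = math.pi / 6 * ((a + b + c) / 3) ** 3

# Two-component model: intracanalicular cone + cisternal hemisphere (assumed).
r, h = b / 2, 0.8                                  # hemisphere radius, cone height
v_cone = math.pi / 3 * r**2 * h
v_hemisphere = 2 * math.pi / 3 * r**3
v_ice_cream_cone = v_cone + v_hemisphere
print(v_cuboidal, v_ellipsoidal, v_spherical, v_ice_cream_cone)
```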

  8. Correlation between audible noise and corona current generated by AC corona discharge in time and frequency domains

    NASA Astrophysics Data System (ADS)

    Li, Xuebao; Wang, Jing; Li, Yinfei; Zhang, Qian; Lu, Tiebing; Cui, Xiang

    2018-06-01

    Corona-generated audible noise is induced by collisions between space charges and air molecules. It has been proven that there is a close correlation between audible noise and corona current in DC corona discharge. Analysis of the correlation between audible noise and corona current can improve understanding of the generation mechanism of corona discharge. In this paper, time-domain waveforms of AC corona-generated audible noise and corona current are measured simultaneously. A one-to-one relationship between sound pressure pulses and corona current pulses is found and is used to remove interference from background noise. After the interference is removed, linear relationships between sound pressure pulse amplitude and corona current pulse amplitude are obtained through statistical analysis. In addition, frequency components at the harmonics of the power frequency (50 Hz) are found in the frequency spectra of both the audible noise and the corona current. Furthermore, the correlations of the harmonic components below 400 Hz with the 50 Hz component are analyzed for audible noise and corona current, and corresponding empirical formulas are proposed to calculate the harmonic components from the 50 Hz component. Finally, based on the AC corona discharge process and the generation mechanism of audible noise and corona current, the correlations between audible noise and corona current in the time and frequency domains are interpreted qualitatively, and, with the aid of analytical expressions for periodic square waves, sound pressure pulses, and corona current pulses, the modulation effect of the AC voltage on the pulse trains is used to interpret the generation of the harmonic components of audible noise and corona current.
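
    A minimal sketch of the harmonic analysis step: extracting the amplitudes of the 50 Hz component and its harmonics up to 400 Hz from a waveform with an FFT; the signal below is synthetic, not measured corona data.

```python
# Sketch: FFT-based extraction of power-frequency harmonics from a waveform.
import numpy as np

fs, T = 51200, 1.0                       # sample rate (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
x = (1.0 * np.sin(2 * np.pi * 50 * t)    # 50 Hz component
     + 0.4 * np.sin(2 * np.pi * 100 * t)
     + 0.2 * np.sin(2 * np.pi * 150 * t)
     + 0.05 * np.random.randn(t.size))

spec = np.abs(np.fft.rfft(x)) / t.size * 2     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for k in range(1, 9):                          # harmonics 50 Hz .. 400 Hz
    idx = np.argmin(np.abs(freqs - 50 * k))
    print(f"{50 * k:3d} Hz amplitude ≈ {spec[idx]:.3f}")
```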

  9. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows could greatly reduce the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels and that purifies the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of inexact augmented Lagrange multiplier is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
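
    The RPCA decomposition underlying the method can be sketched with a compact inexact augmented Lagrange multiplier loop; the parameter choices follow common defaults and the data are synthetic, so this is a sketch of the generic RPCA building block rather than the authors' full RSRPCA.

```python
# Sketch: RPCA (low-rank + sparse) via inexact ALM with common default settings.
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)    # spectral norm sets the step scale
    rho = 1.5
    for _ in range(max_iter):
        # Singular-value thresholding for the low-rank (background) part.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0)) @ Vt
        # Soft thresholding for the sparse (anomaly) part.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y = Y + mu * (M - L - S)
        mu *= rho
        if np.linalg.norm(M - L - S) / np.linalg.norm(M) < tol:
            break
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 60))
sparse = np.zeros((100, 60))
sparse.ravel()[rng.choice(sparse.size, 60, replace=False)] = 10.0
L, S = rpca_ialm(low_rank + sparse)
print(np.linalg.matrix_rank(np.round(L, 6)), np.count_nonzero(np.abs(S) > 1))
```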

  10. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    NASA Technical Reports Server (NTRS)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does contain all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty lies in mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for review of the PRACA data is then used to determine which reports identify a credible failure. These data are then used to determine inter-arrival times to estimate a metric for repairable component or LRU reliability. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for performing repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
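
    A minimal sketch of the inter-arrival-time step, assuming the heuristic has already screened the problem reports down to credible failures; the failure dates are invented.

```python
# Sketch: inter-arrival times from credible-failure dates -> MTBF estimate.
import numpy as np

# Days on which credible failures of one LRU were recorded (assumed data).
failure_days = np.array([12, 95, 160, 243, 371, 468])

interarrival = np.diff(failure_days)            # days between failures
mtbf = interarrival.mean()                      # point estimate of MTBF
rate = 1.0 / mtbf                               # failures per day
print(f"MTBF ≈ {mtbf:.1f} days, failure rate ≈ {rate:.4f}/day")
```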

  11. Visualization and Semiquantitative Study of the Distribution of Major Components in Wheat Straw in Mesoscopic Scale using Fourier Transform Infrared Microspectroscopic Imaging.

    PubMed

    Yang, Zengling; Mei, Jiaqi; Liu, Zhiqiang; Huang, Guangqun; Huang, Guan; Han, Lujia

    2018-06-19

    Understanding the biochemical heterogeneity of plant tissue linked to crop straw anatomy is attractive to plant researchers and researchers in the field of biomass refining. This study provides an in situ analysis and semiquantitative visualization of the distribution of major components in internodal transverse sections of wheat straw based on Fourier transform infrared (FTIR) microspectroscopic imaging, with fast non-negativity-constrained least squares (fast NNLS) fitting. This paper investigates changes in the biochemical components of tissue during the stages of elongation, booting, heading, flowering, grain-filling, milk-ripening, dough, and full-ripening. Visualization analysis was carried out with reference spectra for five components (microcrystalline cellulose, xylan, lignin, pectin, and starch) of wheat straw. Our results showed that (a) the cellulose and lignin distribution is consistent with that from tissue-dyeing with safranin O-fast green and (b) the distribution of cellulose, lignin, and starch is consistent with chemical images at the characteristic wavenumbers 1432, 1507, and 987 cm-1, respectively, showing no interference from the other components analyzed. With validation from biochemical images using characteristic wavenumbers and tissue-dyeing techniques, further semiquantitative analysis in local tissues based on fast NNLS was carried out, and the results showed that (a) the cellulose contents of the various tissues are very different, with the most in parenchyma tissue and the least in the epidermis, and (b) during plant development, the fluctuation of each component in the tissues follows nearly the same trend, especially within vascular bundles and parenchyma tissue. Thus, FTIR microspectroscopic imaging combined with suitable chemometric methods can be successfully applied to study chemical distributions within internodal transverse sections of wheat straw, providing semiquantitative chemical information.
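
    A hedged sketch of the NNLS fitting step, with scipy's nnls standing in for the paper's fast NNLS variant; the reference spectra and concentrations are synthetic.

```python
# Sketch: non-negativity-constrained fit of one pixel spectrum against
# five reference component spectra.
import numpy as np
from scipy.optimize import nnls

n_wavenumbers, n_components = 400, 5
refs = np.abs(np.random.rand(n_wavenumbers, n_components))  # reference spectra
true_conc = np.array([0.6, 0.2, 0.1, 0.05, 0.05])
pixel = refs @ true_conc + 0.01 * np.random.rand(n_wavenumbers)

conc, residual = nnls(refs, pixel)   # non-negative component abundances
print(np.round(conc, 2))
```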

  12. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by PCA applied to a previously composed continuous-spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.

  13. Argo: an integrative, interactive, text mining-based workbench supporting curation

    PubMed Central

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844

  14. Integrating forest inventory and analysis data into a LIDAR-based carbon monitoring system

    Treesearch

    Kristofer D. Johnson; Richard Birdsey; Andrew O Finley; Anu Swantaran; Ralph Dubayah; Craig Wayson; Rachel Riemann

    2014-01-01

    Forest Inventory and Analysis (FIA) data may be a valuable component of a LIDAR-based carbon monitoring system, but integration of the two observation systems is not without challenges. To explore integration methods, two wall-to-wall LIDAR-derived biomass maps were compared to FIA data at both the plot and county levels in Anne Arundel and Howard Counties in Maryland...

  15. The Prediction of Job Ability Requirements Using Attribute Data Based Upon the Position Analysis Questionnaire (PAQ). Technical Report No. 1.

    ERIC Educational Resources Information Center

    Shaw, James B.; McCormick, Ernest J.

    The study was directed towards the further exploration of the use of attribute ratings as the basis for establishing the job component validity of tests, in particular by using different methods of combining "attribute-based" data with "job analysis" data to form estimates of the aptitude requirements of jobs. The primary focus…

  16. Spectral negentropy based sidebands and demodulation analysis for planet bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2017-12-01

    Planet bearing vibration signals are highly complex due to intricate kinematics (involving both revolution and spinning) and strong multiple modulations (including not only the fault-induced amplitude modulation and frequency modulation, but also additional amplitude modulations due to load zone passing, the time-varying vibration transfer path, and the time-varying angle between the gear pair mesh lines of action and the fault impact force vector), leading to difficulty in fault feature extraction. Rolling element bearing fault diagnosis essentially relies on detection of fault-induced repetitive impulses carried by resonance vibration, but these are usually contaminated by noise and are therefore hard to detect. This further adds complexity to planet bearing diagnostics. Spectral negentropy is able to reveal the frequency distribution of repetitive transients, thus providing an approach to identify the optimal frequency band of a filter for separating repetitive impulses. In this paper, we find the informative frequency band (including the center frequency and bandwidth) of bearing fault-induced repetitive impulses using the spectral negentropy based infogram. In the Fourier spectrum, we identify planet bearing faults according to sideband characteristics around the center frequency. For demodulation analysis, we filter out the sensitive component based on the informative frequency band revealed by the infogram. In the amplitude demodulated spectrum (squared envelope spectrum) of the sensitive component, we diagnose planet bearing faults by matching the peaks present with the theoretical fault characteristic frequencies. We further decompose the sensitive component into mono-component intrinsic mode functions (IMFs) to estimate their instantaneous frequencies, and select a sensitive IMF with an instantaneous frequency fluctuating around the center frequency for frequency demodulation analysis. In the frequency demodulated spectrum (Fourier spectrum of the instantaneous frequency) of the selected IMF, we discern the causes of planet bearing faults according to the peaks present. The proposed spectral negentropy infogram based spectrum and demodulation analysis method is illustrated via analysis of a numerically simulated signal. Considering the unique load bearing features of planet bearings, experimental validations under both no-load and loading conditions are performed to verify the derived fault symptoms and the proposed method. Localized faults on the outer race, rolling elements, and inner race are successfully diagnosed.
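
    The squared envelope spectrum used for amplitude demodulation can be sketched via the Hilbert transform; the band-pass filtering guided by the infogram is omitted here, and the signal, carrier, and fault frequency are invented.

```python
# Sketch: squared envelope spectrum of an amplitude-modulated resonance.
import numpy as np
from scipy.signal import hilbert

fs = 20000
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 87.0                                    # assumed fault impact rate
signal = (np.sin(2 * np.pi * 3000 * t)               # resonance carrier
          * (1 + np.cos(2 * np.pi * fault_freq * t)))

env_sq = np.abs(hilbert(signal)) ** 2                # squared envelope
spec = np.abs(np.fft.rfft(env_sq - env_sq.mean()))
freqs = np.fft.rfftfreq(env_sq.size, 1 / fs)
print(freqs[np.argmax(spec)])                        # peaks near fault_freq
```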

  17. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for the statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral database, two-dimensional modeling, and variable problem size. The database allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume-flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).

  18. Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Tang, Xiaoying

    2017-03-01

    Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with a whole brain structural segmentation approach, which greatly reduces the dimension of the features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded good classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
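
    A minimal sketch of the classification stage (regional volumes, three PCs, linear SVM, leave-one-out evaluation), assuming scikit-learn; the feature matrix is a random stand-in for the 54 regional volumes.

```python
# Sketch: PCA dimension reduction + SVM classification with LOO evaluation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(51, 54)           # 28 AD + 23 controls x 54 region volumes
y = np.r_[np.ones(28), np.zeros(23)]

clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2%}")
```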

  19. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    PubMed Central

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessment of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image-area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and installation of the current method provides clinicians and researchers with a low-cost solution to monitor the progression of, and the treatment response to, PD. In summary, the proposed method provides an alternative way to perform gait analysis for patients with PD. PMID:22074315
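
    A hedged sketch of kernel PCA feature extraction from vectorized binary silhouettes, assuming scikit-learn's KernelPCA; the frame count, image size, and kernel settings are assumptions, not the study's configuration.

```python
# Sketch: nonlinear gait features via kernel PCA on binary silhouettes.
import numpy as np
from sklearn.decomposition import KernelPCA

frames = (np.random.rand(200, 64 * 48) > 0.5).astype(float)  # stand-in silhouettes

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3)
features = kpca.fit_transform(frames)    # nonlinear gait features
print(features.shape)                    # (200, 10)
```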

  20. A new methodology based on functional principal component analysis to study postural stability post-stroke.

    PubMed

    Sánchez-Sánchez, M Luz; Belda-Lois, Juan-Manuel; Mena-Del Horno, Silvia; Viosca-Herrero, Enrique; Igual-Camacho, Celedonia; Gisbert-Morant, Beatriz

    2018-05-05

    A major goal in stroke rehabilitation is the establishment of more effective physical therapy techniques to recover postural stability. Functional Principal Component Analysis provides greater insight into recovery trends. However, when missing values exist, obtaining functional data presents some difficulties. The purpose of this study was to reveal an alternative technique for obtaining the Functional Principal Components without requiring the conversion to functional data beforehand and to investigate this methodology to determine the effect of specific physical therapy techniques in balance recovery trends in elderly subjects with hemiplegia post-stroke. A randomized controlled pilot trial was developed. Thirty inpatients post-stroke were included. Control and target groups were treated with the same conventional physical therapy protocol based on functional criteria, but specific techniques were added to the target group depending on the subjects' functional level. Postural stability during standing was quantified by posturography. The assessments were performed once a month from the moment the participants were able to stand up to six months post-stroke. The target group showed a significant improvement in postural control recovery trend six months after stroke that was not present in the control group. Some of the assessed parameters revealed significant differences between treatment groups (P < 0.05). The proposed methodology allows Functional Principal Component Analysis to be performed when data is scarce. Moreover, it allowed the dynamics of recovery of two different treatment groups to be determined, showing that the techniques added in the target group increased postural stability compared to the base protocol. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Design and Analysis of a Micromechanical Three-Component Force Sensor for Characterizing and Quantifying Surface Roughness

    NASA Astrophysics Data System (ADS)

    Liang, Q.; Wu, W.; Zhang, D.; Wei, B.; Sun, W.; Wang, Y.; Ge, Y.

    2015-10-01

    Roughness, which can represent the trade-off between the manufacturing cost and performance of mechanical components, is a critical predictor of cracks, corrosion, and fatigue damage. In order to measure polished or super-finished surfaces, a novel touch probe based on a three-component force sensor for characterizing and quantifying surface roughness is proposed, fabricated using silicon micromachining technology. The sensor design is based on a cross-beam structure, which ensures that the system possesses high sensitivity and low coupling. The results show that the proposed sensor possesses high sensitivity, a low coupling error, and a temperature compensation function. The proposed system can be used to investigate micromechanical structures with nanometer accuracy.

  2. Analysis of indoor air pollutants checklist using environmetric technique for health risk assessment of sick building complaint in nonindustrial workplace

    PubMed Central

    Syazwan, AI; Rafee, B Mohd; Juahir, Hafizan; Azman, AZF; Nizar, AM; Izwyn, Z; Syahidatussyakirah, K; Muhaimin, AA; Yunos, MA Syafiq; Anita, AR; Hanafiah, J Muhamad; Shaharuddin, MS; Ibthisham, A Mohd; Hasmadi, I Mohd; Azhar, MN Mohamad; Azizan, HS; Zulfadhli, I; Othman, J; Rozalini, M; Kamarul, FT

    2012-01-01

    Purpose To analyze and characterize a multidisciplinary, integrated indoor air quality checklist for evaluating the health risk of building occupants in a nonindustrial workplace setting. Design A cross-sectional study based on a participatory occupational health program conducted by the National Institute of Occupational Safety and Health (Malaysia) and Universiti Putra Malaysia. Method A modified version of the indoor environmental checklist published by the Department of Occupational Health and Safety, based on the literature and discussion with occupational health and safety professionals, was used in the evaluation process. Summated scores were given, and cluster analysis and principal component analysis were applied in the characterization of risk. Environmetric techniques were used to classify the risk of the variables in the checklist. The possible sources of item pollutants were also evaluated using a semiquantitative approach. Result Hierarchical agglomerative cluster analysis resulted in the grouping of factorial components into three clusters (high complaint, moderate-high complaint, moderate complaint), which were further analyzed by discriminant analysis. From this, 15 major variables that influence indoor air quality were determined. Principal component analysis of each cluster revealed that the main factors influencing the high complaint group were fungal-related problems, chemical indoor dispersion, detergent, renovation, thermal comfort, and the location of the fresh air intake. The moderate-high complaint group showed significantly high loadings on ventilation, air filters, and smoking-related activities. The moderate complaint group showed high loadings on dampness, odor, and thermal comfort. Conclusion This semiquantitative assessment, which graded risk from low to high based on the intensity of the problem, shows promising and reliable results. It should be used as an important tool in the preliminary assessment of indoor air quality and as a categorizing method for further IAQ investigations and complaints procedures. PMID:23055779

  3. Analysis of indoor air pollutants checklist using environmetric technique for health risk assessment of sick building complaint in nonindustrial workplace.

    PubMed

    Syazwan, Ai; Rafee, B Mohd; Juahir, Hafizan; Azman, Azf; Nizar, Am; Izwyn, Z; Syahidatussyakirah, K; Muhaimin, Aa; Yunos, Ma Syafiq; Anita, Ar; Hanafiah, J Muhamad; Shaharuddin, Ms; Ibthisham, A Mohd; Hasmadi, I Mohd; Azhar, Mn Mohamad; Azizan, Hs; Zulfadhli, I; Othman, J; Rozalini, M; Kamarul, Ft

    2012-01-01

    To analyze and characterize a multidisciplinary, integrated indoor air quality checklist for evaluating the health risk of building occupants in a nonindustrial workplace setting. A cross-sectional study based on a participatory occupational health program conducted by the National Institute of Occupational Safety and Health (Malaysia) and Universiti Putra Malaysia. A modified version of the indoor environmental checklist published by the Department of Occupational Health and Safety, based on the literature and discussion with occupational health and safety professionals, was used in the evaluation process. Summated scores were given, and cluster analysis and principal component analysis were applied in the characterization of risk. Environmetric techniques were used to classify the risk of the variables in the checklist. The possible sources of item pollutants were also evaluated using a semiquantitative approach. Hierarchical agglomerative cluster analysis resulted in the grouping of factorial components into three clusters (high complaint, moderate-high complaint, moderate complaint), which were further analyzed by discriminant analysis. From this, 15 major variables that influence indoor air quality were determined. Principal component analysis of each cluster revealed that the main factors influencing the high complaint group were fungal-related problems, chemical indoor dispersion, detergent, renovation, thermal comfort, and the location of the fresh air intake. The moderate-high complaint group showed significantly high loadings on ventilation, air filters, and smoking-related activities. The moderate complaint group showed high loadings on dampness, odor, and thermal comfort. This semiquantitative assessment, which graded risk from low to high based on the intensity of the problem, shows promising and reliable results. It should be used as an important tool in the preliminary assessment of indoor air quality and as a categorizing method for further IAQ investigations and complaints procedures.

  4. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
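
    For orientation, here is a minimal numpy sketch of classic 2-DPCA, with the reconstruction error additionally measured in the nuclear norm, i.e., the structured 2-D criterion that N-2-DPCA minimizes directly. The plain eigendecomposition solver and the synthetic image stack are assumptions, not the paper's F-norm iteration or its face databases.

        import numpy as np

        # Hypothetical stack of M face images (M x h x w), synthetic here
        rng = np.random.default_rng(1)
        imgs = rng.normal(size=(50, 32, 32))
        centered = imgs - imgs.mean(axis=0)

        # Image covariance matrix of classic 2-DPCA: G = (1/M) sum A_i^T A_i
        G = np.einsum("mhw,mhv->wv", centered, centered) / len(imgs)
        eigvals, eigvecs = np.linalg.eigh(G)
        W = eigvecs[:, ::-1][:, :8]          # top-8 right projection axes

        feats = centered @ W                 # (M x h x 8) feature matrices
        recon = feats @ W.T

        # Nuclear norm (sum of singular values) of each reconstruction error image,
        # the structured 2-D criterion that N-2-DPCA optimizes directly
        err = [np.linalg.norm(E, ord="nuc") for E in (centered - recon)]
        print(np.mean(err))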

  5. On 3-D inelastic analysis methods for hot section components (base program)

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1986-01-01

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

  6. Sublaminate analysis of interlaminar fracture in composites

    NASA Technical Reports Server (NTRS)

    Armanios, E. A.; Rehfield, L. W.

    1986-01-01

    A simple analysis method based upon a transverse shear deformation theory and a sublaminate approach is utilized to analyze a mixed-mode edge delamination specimen. The analysis provides closed form expressions for the interlaminar shear stresses ahead of the crack, the total energy release rate, and the energy release rate components. The parameters controlling the behavior are identified. The effect of specimen stacking sequence and delamination interface on the strain energy release rate components is investigated. Results are compared with a finite element simulation for reference. The simple nature of the method makes it suitable for preliminary design analyses which require a large number of configurations to be evaluated quickly and economically.

  7. Anthropometric profile of combat athletes via multivariate analysis.

    PubMed

    Burdukiewicz, Anna; Pietraszewska, Jadwiga; Stachoń, Aleksandra; Andrzejewska, Justyna

    2017-11-07

    Athletic success is a complex phenotype influenced by multiple factors, from sport-specific skills to anthropometric characteristics. Considering the latter, the literature has repeatedly indicated that athletes possess distinct physical characteristics depending on the practiced discipline. The aim of the present study was to apply univariate and multivariate methods to assess a wide range of morphometric and somatotypic characteristics in male combat athletes. Biometric data were obtained from 206 male university-level practitioners of judo, jiu-jitsu, karate, kickboxing, taekwondo, and wrestling. Measures included height- and length-based variables, breadths, circumferences, and skinfolds. Body proportions and somatotype, using Sheldon's method of somatotyping as modified by Heath and Carter, were then determined. Body fat percentage was assessed by bioelectrical impedance analysis using tetrapolar hand-to-foot electrodes. Data were subjected to a wide array of statistical analyses. The results show between-group differences in the magnitudes of the analyzed characteristics. While mesomorphy was the dominant component of each group's somatotype, enhanced ectomorphy was observed in those disciplines that require a high level of agility. Principal component analysis reduced the multivariate dimensionality of the data to three components (characterizing body size, height-based measures, and the anthropometric structure of the upper extremities) that explained the majority of data variance. The development of a sport-specific anthropometric profile via height- and mass-based, morphometric, and somatotypic variables can aid in the design of training protocols and the identification of athlete markers, as well as serve as a diagnostic criterion in predicting combat athlete performance.

  8. Dimensionality and summary measures of the SF-36 v1.6: comparison of scale- and item-based approach across ECRHS II adults population.

    PubMed

    Grassi, Mario; Nucera, Andrea

    2010-01-01

    The objective of this study was twofold: 1) to confirm the hypothetical eight scales and two-component summaries of the questionnaire Short Form 36 Health Survey (SF-36), and 2) to evaluate the performance of two alternative measures to the original physical component summary (PCS) and mental component summary (MCS). We performed principal component analysis (PCA) based on 35 items, after optimal scaling via multiple correspondence analysis (MCA), and subsequently on eight scales, after standard summative scoring. Item-based summary measures were planned. Data from the European Community Respiratory Health Survey II follow-up of 8854 subjects from 25 centers were analyzed to cross-validate the original and the novel PCS and MCS. Overall, the scale- and item-based comparison indicated that the SF-36 scales and summaries meet the supposed dimensionality. However, vitality, social functioning, and general health items did not fit data optimally. The novel measures, derived a posteriori by unit-rule from an oblique (correlated) MCA/PCA solution, are simple item sums or weighted scale sums where the weights are the raw scale ranges. These item-based scores yielded consistent scale-summary results for outlier profiles, with an expected known-group differences validity. We were able to confirm the hypothesized dimensionality of eight scales and two summaries of the SF-36. The alternative scoring reaches at least the same required standards of the original scoring. In addition, it can reduce the item-scale inconsistencies without loss of predictive validity.

  9. Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test.

    PubMed

    Palmerini, Luca; Mellone, Sabato; Rocchi, Laura; Chiari, Lorenzo

    2011-01-01

    The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately, instrumented versions of the test are being considered, where inertial sensors assess motion. To improve pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to locomotor performance, dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are non-redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (e.g., healthy vs. Parkinson's disease, fallers vs. non-fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
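
    A minimal sketch of the described procedure, assuming synthetic correlated test parameters: PCA retains the components covering most of the variance, and correlation analysis then picks one representative original parameter per retained component. The 90% variance cutoff and the data are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical (subjects x parameters) matrix of highly correlated TUG features
        rng = np.random.default_rng(2)
        base = rng.normal(size=(49, 3))
        params = np.hstack([base + 0.1 * rng.normal(size=(49, 3)) for _ in range(4)])

        X = StandardScaler().fit_transform(params)
        pca = PCA().fit(X)
        k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
        pcs = pca.transform(X)[:, :k]

        # Keep, for each retained component, the original parameter most correlated with it
        keep = {int(np.abs(np.corrcoef(X.T, pcs[:, j])[:-1, -1]).argmax()) for j in range(k)}
        print(f"{k} components retained; representative parameters: {sorted(keep)}")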

  10. A new statistical PCA-ICA algorithm for location of R-peaks in ECG.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-16

    The success of ICA in separating independent components from a mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation, and evaluates issues related to the properties of the signals and the number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features in the presence and absence of baseline wander when interpreting ECG signals. In this analysis, a newly developed statistical algorithm based on combined PCA-ICA for two correlated channels of 12-channel ECG data is proposed. The ICA technique has been successfully implemented in identifying and removing noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures such as kurtosis and variance of variance after ICA processing. This paper also deals with the detection of QRS complexes in electrocardiograms using the combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points, hence none of the peaks are ignored or missed.
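
    A rough sketch of a combined PCA-ICA pipeline in this spirit, using a synthetic two-channel signal: PCA whitens the channels, FastICA separates the sources, and kurtosis selects the QRS-bearing source before a simple threshold peak search. This is a generic illustration, not the authors' algorithm.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import PCA, FastICA

        # Two correlated synthetic "ECG" channels with baseline wander (hypothetical data)
        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 5000)
        qrs = (np.abs(((t * 1.2) % 1.0) - 0.5) < 0.02).astype(float)   # crude R-wave train
        ecg = np.c_[qrs + 0.3 * np.sin(0.5 * t), 0.8 * qrs + 0.3 * np.sin(0.5 * t + 1.0)]
        ecg += 0.05 * rng.normal(size=ecg.shape)

        # PCA decorrelates and whitens the channels; ICA then separates independent sources
        whitened = PCA(whiten=True).fit_transform(ecg)
        sources = FastICA(n_components=2, random_state=0).fit_transform(whitened)

        # Pick the spiky source by kurtosis (QRS complexes are strongly super-Gaussian)
        s = np.abs(sources[:, np.argmax(kurtosis(sources, axis=0))])
        peaks = 1 + np.flatnonzero(
            (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > 3 * s.std()))
        print(len(peaks), "candidate R-peak locations")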

  11. A comparative study of volatile components in Dianhong teas from fresh leaves of four tea cultivars by using chromatography-mass spectrometry, multivariate data analysis, and descriptive sensory analysis.

    PubMed

    Wang, Chao; Zhang, Chenxia; Kong, Yawen; Peng, Xiaopei; Li, Changwen; Liu, Shunhang; Du, Liping; Xiao, Dongguang; Xu, Yongquan

    2017-10-01

    Dianhong teas produced from fresh leaves of different tea cultivars (YK is Yunkang No. 10, XY is Xueya 100, CY is Changyebaihao, SS is Shishengmiao) were compared in terms of volatile compounds and descriptive sensory analysis. A total of 73 volatile compounds in 16 tea samples were tentatively identified. YK, XY, CY, and SS contained 55, 53, 49, and 51 volatile compounds, respectively. Partial least squares-discriminant analysis (PLS-DA) and hierarchical cluster analysis (HCA) were used to classify the samples, and 40 key components were selected based on variable importance in the projection. Moreover, 11 flavor attributes, namely, floral, fruity, grass/green, woody, sweet, roasty, caramel, mellow and thick, bitter, astringent, and sweet aftertaste, were identified through descriptive sensory analysis (DSA). In general, innate differences among the tea cultivars significantly affected the intensities of most of the key sensory attributes of Dianhong teas, possibly because of the different amounts of aroma-active and taste components. Copyright © 2017 Elsevier Ltd. All rights reserved.
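
    As a sketch of the chemometric step, assuming a hypothetical peak-area matrix: PLS-DA implemented as PLS regression on one-hot class labels, with variable importance in the projection (VIP) used to flag key compounds, analogous to the paper's selection of 40 key components.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Hypothetical (samples x volatile-compound) peak-area matrix and cultivar labels
        rng = np.random.default_rng(4)
        X = rng.normal(size=(16, 73))
        y = np.repeat([0, 1, 2, 3], 4)
        X[:, :5] += y[:, None]                       # a few discriminating volatiles

        Y = np.eye(4)[y]                             # one-hot encoding turns PLS into PLS-DA
        pls = PLSRegression(n_components=3).fit(X, Y)
        pred = pls.predict(X).argmax(axis=1)

        # Variable importance in projection (VIP): compounds with VIP > 1 are "key"
        T, W, Q = pls.transform(X), pls.x_weights_, pls.y_loadings_
        ssy = (T ** 2).sum(axis=0) * (Q ** 2).sum(axis=0)
        vip = np.sqrt(W.shape[0] * ((W / np.linalg.norm(W, axis=0)) ** 2 @ ssy) / ssy.sum())
        print((pred == y).mean(), int((vip > 1).sum()), "key compounds")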

  12. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, or arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.

  13. Reference Models for Structural Technology Assessment and Weight Estimation

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd

    2005-01-01

    Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of being able to handle a large variety of structural and vehicle general arrangement alternatives. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based Java processing procedures and associated process control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, through full air and space vehicle general arrangements.

  14. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.

  15. Analysis of the symbiotic star AG Pegasi

    NASA Technical Reports Server (NTRS)

    Keyes, C. D.; Plavec, M. J.

    1981-01-01

    High- and low-dispersion IUE data are analyzed in conjunction with coincident ground-based spectrophotometric scans and supplementary infrared photometry of the symbiotic object AG Pegasi. The IUE observations yield an improved value of E(B-V) = 0.12. The two stellar components are easily recognized in the spectra. The cool component may be an M1.7 III star, and the hot component appears to have T(sub eff) of approximately 30,000 K. The emission lines observed in the ultraviolet indicate two or three distinct emitting regions. Nebular-component ultraviolet intercombination lines suggest an electron density of several times 10 billion/cu cm.

  16. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  17. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  18. Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC

    PubMed Central

    López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2018-01-01

    The modal characterization of automotive lighting systems is difficult with contact sensors because the elements that compose such systems are lightweight and offer intricate access for sensor placement. In experimental modal analysis, high-speed 3D digital image correlation (HS 3D-DIC) is attracting attention because it provides full-field contactless measurements of 3D displacements as its main advantage over other techniques. Different methodologies have been published that perform modal identification, i.e., extraction of natural frequencies, damping ratios, and mode shapes, using the full-field information. In this work, experimental modal analysis was performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology was employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. The results make it possible to identify the local and global behavior of the different elements of injected polymeric and metallic materials. PMID:29401725

  19. Embedding Dimension Selection for Adaptive Singular Spectrum Analysis of EEG Signal.

    PubMed

    Xu, Shanzhi; Hu, Hai; Ji, Linhong; Wang, Peng

    2018-02-26

    The recorded electroencephalography (EEG) signal is often contaminated with different kinds of artifacts and noise. Singular spectrum analysis (SSA) is a powerful tool for extracting the brain rhythm from a noisy EEG signal. By analyzing the frequency characteristics of the reconstructed component (RC) and the change rate in the trace of the Toeplitz matrix, it is demonstrated that the embedding dimension is related to the frequency bandwidth of each reconstructed component, consistent with the component mixing in the singular value decomposition step. A method for selecting the embedding dimension is thereby proposed and verified on simulated EEG signals based on the Markov Process Amplitude (MPA) EEG model. Real EEG signals were also collected from the experimental subjects under both eyes-open and eyes-closed conditions. The experimental results show that, based on the embedding dimension selection method, the alpha rhythm can be extracted from the real EEG signal by the adaptive SSA, which can be effectively utilized to distinguish between the eyes-open and eyes-closed states.
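
    A basic SSA implementation for reference, assuming synthetic data: embedding into a trajectory matrix, SVD, and diagonal averaging back to reconstructed components. The choice L=25 and the use of the leading eigenpair for the alpha rhythm are illustrative, not the paper's adaptive selection rule.

        import numpy as np

        def ssa(x, L):
            """Basic singular spectrum analysis: embed, decompose, reconstruct.
            L is the embedding dimension (window length); per the paper it should
            be matched to the bandwidth of the target rhythm (assumption here)."""
            N = len(x)
            K = N - L + 1
            X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            comps = []
            for i in range(len(s)):
                Xi = s[i] * np.outer(U[:, i], Vt[i])
                # Diagonal averaging (Hankelization) maps each rank-1 term to a series
                comps.append(np.array([Xi[::-1].diagonal(k).mean()
                                       for k in range(-L + 1, K)]))
            return np.array(comps)

        t = np.linspace(0, 2, 500)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(5).normal(size=500)
        rc = ssa(eeg, L=25)
        alpha = rc[:2].sum(axis=0)     # the leading pair typically carries the 10 Hz rhythm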

  20. Respiratory motion correction in dynamic MRI using robust data decomposition registration - application to DCE-MRI.

    PubMed

    Hamy, Valentin; Dikaios, Nikolaos; Punwani, Shonit; Melbourne, Andrew; Latifoltojar, Arash; Makanyanga, Jesica; Chouhan, Manil; Helbren, Emma; Menys, Alex; Taylor, Stuart; Atkinson, David

    2014-02-01

    Motion correction in dynamic contrast-enhanced (DCE-)MRI is challenging because rapid intensity changes can compromise common (intensity-based) registration algorithms. In this study we introduce a novel registration technique based on robust principal component analysis (RPCA) to decompose a given time-series into a low-rank and a sparse component. This allows robust separation of motion components that can be registered, from intensity variations that are left unchanged. This Robust Data Decomposition Registration (RDDR) is demonstrated on both simulated data and a wide range of clinical data. Robustness to different types of motion and breathing choices during acquisition is demonstrated for a variety of imaged organs including liver, small bowel, and prostate. The analysis of clinically relevant regions of interest showed both a decrease of error (15-62% reduction following registration) in tissue time-intensity curves and improved areas under the curve (AUC60) at early enhancement. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
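
    A compact sketch of the RPCA decomposition at the core of RDDR, via principal component pursuit with an inexact augmented-Lagrangian iteration (after Candes et al.); the registration step itself is omitted and the data are random placeholders, so this is not the RDDR code.

        import numpy as np

        def rpca(M, lam=None, mu=None, n_iter=200):
            """Minimal robust PCA: split M into low-rank L (slowly varying anatomy
            and motion) plus sparse S (rapid contrast enhancement). Sketch only."""
            m, n = M.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * shrink(s, 1.0 / mu)) @ Vt      # singular-value thresholding
                S = shrink(M - L + Y / mu, lam / mu)    # soft-threshold the residual
                Y += mu * (M - L - S)                   # dual variable update
            return L, S

        frames = np.random.default_rng(6).normal(size=(64, 30))   # pixels x time, dummy
        low_rank, sparse = rpca(frames)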

  1. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  2. An analytics of electricity consumption characteristics based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Feng, Junshu

    2018-02-01

    More detailed analysis of electricity consumption characteristics can make demand-side management (DSM) much more targeted. In this paper, an analytics of electricity consumption characteristics based on principal component analysis (PCA) is given, in which the PCA method is used to extract the main typical characteristics of electricity consumers. Then, an electricity consumption characteristics matrix is designed, which enables comparison of typical electricity consumption characteristics between different types of consumers, such as industrial consumers, commercial consumers, and residential consumers. In our case study, electricity consumption has been divided into four main characteristics: extreme peak using, peak using, peak-shifting using, and others. Moreover, it has been found that industrial consumers shift their peak load often, while commercial and residential consumers have more peak-time consumption. The conclusions can provide decision support on DSM for the government and power providers.

  3. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  4. Gear fault diagnosis based on the structured sparsity time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong

    2018-03-01

    Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent capability and the high flexibility for complex signal description. The structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method, which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract the transient feature from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of the defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhood and generalized thresholding operators. Besides, the time-frequency distribution with high resolution is obtained by piling different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or by the periodicity of impulses. The effectiveness of the method is verified by numerical simulations, and the vibration signals registered from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made among some state-of-the-art vibration separation methods and the traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and accounting for the inner time-frequency structure of the gear vibration signals.

  5. Determination of mechanical loading components of the equine metacarpus from measurements of strain during walking.

    PubMed

    Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S

    2006-08-01

    The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. The objectives were to determine the forces and moments experienced by the metacarpus in vivo during walking and to assess the effect of some simplifying assumptions used in analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models, one based upon the mechanical theory of beams and shafts and the other upon a finite element analysis (FEA), were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bimodal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.

  6. Quantification method for the appearance of melanin pigmentation using independent component analysis

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi

    2005-04-01

    In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract information on each skin chromophore independently as density information. The isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003. However, that work did not develop a quantification method for melanin pigmentation. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. As a result, the method worked well for skin images including those showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to the subjective rating values for the appearance of pigmentation. Further analysis is needed to recognize the appearance of pigmentation with respect to the size of the pigmented area and its spatial gradation.
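
    A hedged sketch of the unmixing-plus-detection idea, assuming a synthetic image and that chromophore densities mix approximately linearly in optical-density space; which ICA output corresponds to melanin is an assumption here and would need to be verified against reference spectra in practice.

        import numpy as np
        from scipy.ndimage import gaussian_laplace
        from sklearn.decomposition import FastICA

        # Hypothetical polarized skin photograph, synthetic here (H x W x 3, values in (0,1])
        rng = np.random.default_rng(7)
        rgb = np.clip(rng.normal(0.6, 0.1, size=(128, 128, 3)), 0.05, 1.0)

        # Work in optical-density space, where chromophore densities mix ~linearly
        od = -np.log(rgb).reshape(-1, 3)

        # ICA unmixes the density image into independent chromophore components
        # (melanin vs. hemoglobin); taking component 0 as melanin is an assumption
        ica = FastICA(n_components=2, random_state=0).fit(od)
        melanin = ica.transform(od)[:, 0].reshape(128, 128)

        # LoG filter highlights blob-like pigmented spots; a threshold picks the area
        log = gaussian_laplace(melanin, sigma=3)
        pigmented_area = int((log < log.mean() - 2 * log.std()).sum())
        print(pigmented_area, "pixels flagged as pigmentation")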

  7. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
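
    A minimal sketch of baseline KECA, the starting point that OKECA extends, assuming an RBF kernel and synthetic data: eigenpairs are ranked by their contribution to the Renyi entropy estimate rather than by eigenvalue size. OKECA's extra ICA-style rotation and gradient-ascent optimization are not shown.

        import numpy as np

        def keca(X, sigma, k):
            """Kernel entropy component analysis sketch: rank kernel eigenpairs by
            their contribution to the quadratic Renyi entropy estimate."""
            sq = ((X[:, None] - X[None]) ** 2).sum(-1)
            K = np.exp(-sq / (2 * sigma ** 2))            # RBF kernel matrix
            lam, U = np.linalg.eigh(K)
            # Entropy contribution of each eigenpair: lam_i * (sum of u_i entries)^2
            contrib = lam * (U.sum(axis=0) ** 2)
            idx = np.argsort(contrib)[::-1][:k]
            return U[:, idx] * np.sqrt(np.abs(lam[idx]))  # projected entropy components

        X = np.random.default_rng(8).normal(size=(100, 5))
        Z = keca(X, sigma=1.5, k=2)                       # two entropy components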

  8. Application of the principles of evidence-based practice in decision making among senior management in Nova Scotia's addiction services agencies.

    PubMed

    Murphy, Matthew; MacCarthy, M Jayne; McAllister, Lynda; Gilbert, Robert

    2014-12-05

    Competency profiles for occupational clusters within Canada's substance abuse workforce (SAW) define the need for skill and knowledge in evidence-based practice (EBP) across all its members. Members of the Senior Management occupational cluster hold ultimate responsibility for decisions made within addiction services agencies and therefore must possess the highest level of proficiency in EBP. The objective of this study was to assess the knowledge of the principles of EBP, and the use of the components of the evidence-based decision making (EBDM) process, among members of this occupational cluster from selected addiction services agencies in Nova Scotia. A convenience sampling method was used to recruit participants from addiction services agencies. Semi-structured qualitative interviews were conducted with eighteen members of Senior Management. The interviews were audio-recorded, transcribed verbatim, and checked by the participants. Interview transcripts were coded and analyzed for themes using content analysis, assisted by qualitative data analysis software (NVivo 9.0). Data analysis revealed four main themes: 1) Senior Management believe that addiction services agencies are evidence-based; 2) consensus-based decision making is the norm; 3) Senior Management understand the principles of EBP; and 4) Senior Management do not themselves use all components of the EBDM process when making decisions, often delegating components of this process to decision support staff. Senior Management possess an understanding of the principles of EBP; however, when making decisions they often delegate components of the EBDM process to decision support staff. Decision support staff are not defined as an occupational cluster in Canada's SAW and have not been ascribed a competency profile. As such, there is no guarantee that this group possesses competency in EBDM. There is a need to advocate for the development of a defined occupational cluster and associated competency profile for this critical group.

  9. Use of Raman microscopy and band-target entropy minimization analysis to identify dyes in a commercial stamp. Implications for authentication and counterfeit detection.

    PubMed

    Widjaja, Effendi; Garland, Marc

    2008-02-01

    Raman microscopy was used in mapping mode to collect more than 1000 spectra in a 100 microm x 100 microm area of a commercial stamp. Band-target entropy minimization (BTEM) was then employed to unmix the mixture spectra in order to extract the pure component spectra of the samples. Three pure component spectral patterns with good signal-to-noise ratios were recovered, and their spatial distributions were determined. The three pure component spectral patterns were then identified as copper phthalocyanine blue, a calcite-like material, and a yellow organic dye material by comparison to known spectral libraries. The present investigation, consisting of (1) advanced curve resolution (blind-source separation) followed by (2) spectral database matching, readily suggests extensions to authenticity and counterfeit studies of other types of commercial objects. The presence or absence of specific observable components forms the basis for assessment. The present spectral analysis (BTEM) is applicable to highly overlapping spectral information. Since a priori information such as the number of components present and spectral libraries is not needed in BTEM, and since minor signals arising from trace components can be reconstructed, this analysis offers a robust approach to a wide variety of material problems involving authenticity and counterfeit issues.

  10. Prevalence of Metabolic Syndrome and Its Components among Japanese Workers by Clustered Business Category.

    PubMed

    Hidaka, Tomoo; Hayakawa, Takehito; Kakamu, Takeyasu; Kumagai, Tomohiro; Hiruta, Yuhei; Hata, Junko; Tsuji, Masayoshi; Fukushima, Tetsuhito

    2016-01-01

    The present study was a cross-sectional study conducted to reveal the prevalence of metabolic syndrome and its components and describe the features of such prevalence among Japanese workers by clustered business category using big data. The data of approximately 120,000 workers were obtained from a national representative insurance organization, and the study analyzed the health checkup and questionnaire results according to the field of business of each subject. Abnormalities found during the checkups such as excessive waist circumference, hypertension or glucose intolerance, and metabolic syndrome, were recorded. All subjects were classified by business field into 18 categories based on The North American Industry Classification System. Based on the criteria of the Japanese Committee for the Diagnostic Criteria of Metabolic Syndrome, the standardized prevalence ratio (SPR) of metabolic syndrome and its components by business category was calculated, and the 95% confidence interval of the SPR was computed. Hierarchical cluster analysis was then performed based on the SPR of metabolic syndrome components, and the 18 business categories were classified into three clusters for both males and females. The following business categories were at significantly high risk of metabolic syndrome: among males, Construction, Transportation, Professional Services, and Cooperative Association; and among females, Health Care and Cooperative Association. The results of the cluster analysis indicated one cluster for each gender with a higher prevalence of metabolic syndrome components; among males, a cluster consisting of Manufacturing, Transportation, Finance, and Cooperative Association, and among females, a cluster consisting of Mining, Transportation, Finance, Accommodation, and Cooperative Association. These findings reveal that, when providing health guidance and support regarding metabolic syndrome, consideration must be given to its components and the variety of its prevalence rates by business category and gender.

  11. Prevalence of Metabolic Syndrome and Its Components among Japanese Workers by Clustered Business Category

    PubMed Central

    Hidaka, Tomoo; Hayakawa, Takehito; Kakamu, Takeyasu; Kumagai, Tomohiro; Hiruta, Yuhei; Hata, Junko; Tsuji, Masayoshi; Fukushima, Tetsuhito

    2016-01-01

    The present study was a cross-sectional study conducted to reveal the prevalence of metabolic syndrome and its components and describe the features of such prevalence among Japanese workers by clustered business category using big data. The data of approximately 120,000 workers were obtained from a national representative insurance organization, and the study analyzed the health checkup and questionnaire results according to the field of business of each subject. Abnormalities found during the checkups such as excessive waist circumference, hypertension or glucose intolerance, and metabolic syndrome, were recorded. All subjects were classified by business field into 18 categories based on The North American Industry Classification System. Based on the criteria of the Japanese Committee for the Diagnostic Criteria of Metabolic Syndrome, the standardized prevalence ratio (SPR) of metabolic syndrome and its components by business category was calculated, and the 95% confidence interval of the SPR was computed. Hierarchical cluster analysis was then performed based on the SPR of metabolic syndrome components, and the 18 business categories were classified into three clusters for both males and females. The following business categories were at significantly high risk of metabolic syndrome: among males, Construction, Transportation, Professional Services, and Cooperative Association; and among females, Health Care and Cooperative Association. The results of the cluster analysis indicated one cluster for each gender with a higher prevalence of metabolic syndrome components; among males, a cluster consisting of Manufacturing, Transportation, Finance, and Cooperative Association, and among females, a cluster consisting of Mining, Transportation, Finance, Accommodation, and Cooperative Association. These findings reveal that, when providing health guidance and support regarding metabolic syndrome, consideration must be given to its components and the variety of its prevalence rates by business category and gender. PMID:27082961

  12. [Evaluate drug interaction of multi-components in Morus alba leaves based on α-glucosidase inhibitory activity].

    PubMed

    Ji, Tao; Su, Shu-Lan; Guo, Sheng; Qian, Da-Wei; Ouyang, Zhen; Duan, Jin-Ao

    2016-06-01

    Column chromatography was used for the enrichment and separation of flavonoids, alkaloids, and polysaccharides from the extracts of Morus alba leaves; the glucose oxidase method was used, with sucrose as the substrate, to evaluate the multi-components of M. alba leaves in α-glucosidase inhibitory models; the isobole method, Chou-Talalay combination index analysis, and isobolographic analysis were used to evaluate the interaction effects and dose-effect characteristics of paired components, providing a scientific basis for revealing the hypoglycemic mechanism of M. alba leaves. Component analysis showed that flavonoid content was 5.3%, organic phenolic acid content was 10.8%, DNJ content was 39.4%, and polysaccharide content was 18.9%. Activity evaluation results demonstrated that the flavonoids, alkaloids, and polysaccharides of M. alba leaves had significant inhibitory effects on α-glucosidase, and the inhibitory rate increased with increasing concentration. Alkaloids showed the most significant inhibitory effects among these three components. Both the combination of alkaloids and flavonoids and the combination of alkaloids and polysaccharides demonstrated synergistic effects, but the combination of flavonoids and polysaccharides showed no obvious synergistic effects. The results confirmed the interaction of multi-components from M. alba leaves in regulating blood sugar, and provided a scientific basis for revealing the hypoglycemic effectiveness and mechanism of the multi-components from M. alba leaves. Copyright© by the Chinese Pharmaceutical Association.

  13. An Exploratory Study on Using Principal-Component Analysis and Confirmatory Factor Analysis to Identify Bolt-On Dimensions: The EQ-5D Case Study.

    PubMed

    Finch, Aureliano Paolo; Brazier, John Edward; Mukuria, Clara; Bjorner, Jakob Bue

    2017-12-01

    Generic preference-based measures (GPBMs) such as the EuroQol five-dimensional questionnaire (EQ-5D) are used in economic evaluation but may not be appropriate for all conditions. When this happens, a possible solution is adding bolt-ons to expand their descriptive systems. Using review-based methods, studies published to date have claimed the relevance of bolt-ons in the presence of poor psychometric results. This approach does not identify the specific dimensions missing from the GPBM core descriptive system, and is inappropriate for identifying dimensions that might improve the measure generically. This study explores the use of principal-component analysis (PCA) and confirmatory factor analysis (CFA) for bolt-on identification in the EQ-5D. Data were drawn from the international Multi-Instrument Comparison study, an online survey on health and well-being measures in five countries. Analysis was based on a pool of 92 items from nine instruments. Initial content analysis provided a theoretical framework for interpreting the PCA results and developing the CFA model. PCA was used to investigate the underlying dimensional structure and whether EQ-5D items were represented in the identified constructs. CFA was used to confirm the structure and was cross-validated in random halves of the sample. PCA suggested a nine-component solution, which was confirmed by CFA. This included psychological symptoms, physical functioning, and pain, which were covered by the EQ-5D, and satisfaction, speech/cognition, relationships, hearing, vision, and energy/sleep, which were not. These latter factors may represent relevant candidate bolt-ons. PCA and CFA appear to be useful methods for identifying potential bolt-on dimensions for an instrument such as the EQ-5D. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Do ewes born with a male co-twin have greater longevity with lambing over time?

    USDA-ARS?s Scientific Manuscript database

    Based on a recent analysis of historical records, ewes born co-twin to a ram had greater lifetime reproductive performance than ewes born co-twin to a ewe. We are interested in determining what component(s) of lifetime reproductive performance may be associated with a ewe’s co-twin sex. As an initi...

  15. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  16. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that guards against component information loss. Raman spectral signals with low analyte-concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using a data set recorded from an analyte-concentration determination experiment based on Raman spectroscopy. A 2-fold cross-validation with a Venetian-blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
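
    A toy numpy illustration of the augmentation idea, under the assumption that surrogate spectral channels are picked by low absolute correlation with the known concentrations; the selection rule, data, and channel count stand in for the paper's RMSECV-driven procedure and are not its actual implementation.

        import numpy as np

        rng = np.random.default_rng(9)
        n_wave, n_samp = 200, 30
        S_known = np.abs(rng.normal(size=(2, n_wave)))    # pure spectra of modeled analytes
        S_miss = np.abs(rng.normal(size=(1, n_wave)))     # interferent missing from the model
        C_known = rng.uniform(0.1, 1.0, size=(n_samp, 2))
        C_miss = rng.uniform(0.1, 1.0, size=(n_samp, 1))
        A = C_known @ S_known + C_miss @ S_miss + 0.01 * rng.normal(size=(n_samp, n_wave))

        # Classic CLS: estimate pure-component spectra from the known concentrations only
        S_cls = np.linalg.lstsq(C_known, A, rcond=None)[0]

        # ACLS idea: augment the concentration matrix with spectral signals that are
        # weakly correlated with the analytes, as surrogates for the lost component
        corr = np.abs(np.corrcoef(C_known.T, A.T)[:2, 2:]).max(axis=0)
        aug = A[:, np.argsort(corr)[:3]]                  # 3 surrogate signals (RMSECV
        C_aug = np.hstack([C_known, aug])                 # would tune this in practice)
        S_acls = np.linalg.lstsq(C_aug, A, rcond=None)[0]

        print("CLS residual :", np.linalg.norm(A - C_known @ S_cls))
        print("ACLS residual:", np.linalg.norm(A - C_aug @ S_acls))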

  17. Algorithm based on the short-term Rényi entropy and IF estimation for noisy EEG signals analysis.

    PubMed

    Lerga, Jonatan; Saulig, Nicoletta; Mozetič, Vladimir

    2017-01-01

    Stochastic electroencephalogram (EEG) signals are known to be nonstationary and often multicomponential. Detecting and extracting their components may help clinicians to localize brain neurological dysfunctions in patients with motor control disorders, because movement-related cortical activity is reflected in spectral EEG changes. A new algorithm for detecting EEG signal components from the signal's time-frequency distribution (TFD) is proposed in this paper. The algorithm utilizes a modification of the Rényi entropy-based technique for estimating the number of components, called the short-term Rényi entropy (STRE), upgraded by an iterative algorithm which was shown to enhance existing approaches. Combined with instantaneous frequency (IF) estimation, the proposed method was applied to EEG signal analysis in both noise-free and noisy environments for limb-movement EEG signals, and was shown to be an efficient technique providing a spectral description of brain activity at each electrode location up to moderate additive noise levels. Furthermore, the obtained information concerning the number of EEG signal components and their IFs shows potential to enhance the diagnostics and treatment of neurological disorders in patients with motor control illnesses. Copyright © 2016 Elsevier Ltd. All rights reserved.
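
    A simplified sketch of a short-term Rényi entropy curve, computed here over an ordinary spectrogram rather than a high-resolution TFD; the window sizes, entropy order, and test signal are assumptions, and the paper's iterative refinement and IF-estimation steps are omitted.

        import numpy as np
        from scipy.signal import spectrogram

        def short_term_renyi(x, fs, alpha=3, win=32):
            """Slide a window over a spectrogram and compute the order-alpha Renyi
            entropy of each normalized local patch. Jumps in the curve suggest
            components appearing or vanishing (simplified illustration)."""
            f, t, S = spectrogram(x, fs=fs, nperseg=64, noverlap=48)
            H = []
            for i in range(0, S.shape[1] - win, win // 4):
                P = S[:, i:i + win]
                P = P / P.sum()                            # normalize to a 2-D density
                H.append(np.log2((P ** alpha).sum()) / (1 - alpha))
            return np.array(H)

        fs = 256
        t = np.arange(0, 8, 1 / fs)
        # One component throughout, a second appearing halfway: entropy rises there
        eeg = np.sin(2 * np.pi * 10 * t) + (t > 4) * np.sin(2 * np.pi * 21 * t)
        print(short_term_renyi(eeg, fs).round(2))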

  18. Tailored multivariate analysis for modulated enhanced diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caliandro, Rocco; Guccione, Pietro; Nico, Giovanni

    2015-10-21

    Modulated enhanced diffraction (MED) is a technique allowing the dynamic structural characterization of crystalline materials subjected to an external stimulus, which is particularly suited for in situ and operando structural investigations at synchrotron sources. Contributions from the (active) part of the crystal system that varies synchronously with the stimulus can be extracted by an offline analysis, which can only be applied in the case of periodic stimuli and linear system responses. In this paper a new decomposition approach based on multivariate analysis is proposed. The standard principal component analysis (PCA) is adapted to treat MED data: specific figures of merit based on their scores and loadings are found, and the directions of the principal components obtained by PCA are modified to maximize such figures of merit. As a result, a general method to decompose MED data, called optimum constrained components rotation (OCCR), is developed, which produces very precise results on simulated data, even in the case of nonperiodic stimuli and/or nonlinear responses. The multivariate analysis approach is able to supply in one shot both the diffraction pattern related to the active atoms (through the OCCR loadings) and the time dependence of the system response (through the OCCR scores). When applied to real data, OCCR was able to supply only the latter information, as the former was hindered by changes in the abundances of different crystal phases, which occurred alongside structural variations in the specific case considered. Developing a decomposition procedure able to cope with this combined effect represents the next challenge in MED analysis.

  19. Differentiation of commercial fuels based on polar components using negative electrospray ionization/mass spectrometry

    USGS Publications Warehouse

    Rostad, C.E.

    2006-01-01

    Polar components in fuels may enable differentiation between fuel types or commercial fuel sources. A range of commercial fuels from numerous sources were analyzed by flow injection analysis/electrospray ionization/mass spectrometry without extensive sample preparation, separation, or chromatography. This technique enabled screening for unique polar components at parts-per-million levels in commercial hydrocarbon products, including a range of products from a variety of commercial sources and locations. Because these polar compounds differ between fuels, their presence may provide source information on hydrocarbons released into the environment. The analysis was then applied to mixtures of various products, as might be found in accidental releases into the environment. Copyright © Taylor & Francis Group, LLC.

  20. Cell module and fuel conditioner development

    NASA Technical Reports Server (NTRS)

    Feret, J. M.

    1982-01-01

    The efforts performed to develop a phosphoric acid fuel cell (PAFC) stack design having a 10 kW power rating for operation at higher than atmospheric pressure, based on the existing Mark II design configuration, are described. The work involves: (1) performance of pertinent functional analyses, trade studies and thermodynamic cycle analyses for requirements definition and system operating parameter selection purposes; (2) characterization of fuel cell materials and components, and performance testing and evaluation of the repeating electrode components; (3) establishment of the state-of-the-art manufacturing technology for all fuel cell components at Westinghouse and the fabrication of short stacks of various sizes; and (4) development of a 10 kW PAFC stack design for higher pressure operation utilizing the top-down systems engineering approach.

  1. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable for industrial applications. Two numerical examples and one real seismic data set are used to demonstrate the performance of the presented method.
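
    A compact SVD-based stack along these lines (a sketch, not the authors' algorithm; it assumes aligned traces with consistent polarity, and for massive gathers the full SVD would be replaced by a randomized or iterative rank-1 solver):

      import numpy as np

      def pca_stack(D):
          """Stack aligned traces D (n_traces, n_samples) by projecting onto
          the dominant singular vector; noisy, inconsistent traces receive
          smaller weights than in a plain average."""
          U, s, Vt = np.linalg.svd(D, full_matrices=False)
          w = U[:, 0]
          w = w / w.sum()        # scale weights so amplitudes stay comparable
          return w @ D           # weighted stack, shape (n_samples,)

      # baseline for comparison
      def mean_stack(D):
          return D.mean(axis=0)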

  2. Esophageal cancer detection based on tissue surface-enhanced Raman spectroscopy and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan

    2013-01-01

    The capability of using silver nanoparticle-based near-infrared surface-enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminant analysis (LDA) to differentiate esophageal cancer tissue from normal tissue is presented. Significant differences in the Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved a diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
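
    A sketch of a PCA-LDA diagnostic pipeline of the kind described, using scikit-learn (array names, component count, and the cross-validation scheme are illustrative, not taken from the paper):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_predict

      def pca_lda_diagnosis(X, y, n_components=8):
          """X: (n_spectra, n_wavenumbers) SERS intensities; y: 1 = cancer, 0 = normal."""
          model = make_pipeline(StandardScaler(),
                                PCA(n_components=n_components),
                                LinearDiscriminantAnalysis())
          y_pred = cross_val_predict(model, X, y, cv=5)
          sensitivity = np.mean(y_pred[y == 1] == 1)
          specificity = np.mean(y_pred[y == 0] == 0)
          return sensitivity, specificity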

  3. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.

  4. Improved Holistic Analysis of Rayleigh Waves for Single- and Multi-Offset Data: Joint Inversion of Rayleigh-Wave Particle Motion and Vertical- and Radial-Component Velocity Spectra

    NASA Astrophysics Data System (ADS)

    Dal Moro, Giancarlo; Moustafa, Sayed S. R.; Al-Arifi, Nassir S.

    2018-01-01

    Rayleigh waves often propagate according to complex mode excitation so that the proper identification and separation of specific modes can be quite difficult or, in some cases, just impossible. Furthermore, the analysis of a single component (i.e., an inversion procedure based on just one objective function) necessarily prevents solving the problems related to the non-uniqueness of the solution. To overcome these issues and define a holistic analysis of Rayleigh waves, we implemented a procedure to acquire data that are useful to define and efficiently invert the three objective functions defined from the three following "objects": the velocity spectra of the vertical- and radial-components and the Rayleigh-wave particle motion (RPM) frequency-offset data. Two possible implementations are presented. In the first case we consider classical multi-offset (and multi-component) data, while in a second possible approach we exploit the data recorded by a single three-component geophone at a fixed offset from the source. Given the simple field procedures, the method could be particularly useful for the unambiguous geotechnical exploration of large areas, where more complex acquisition procedures, based on the joint acquisition of Rayleigh and Love waves, would not be economically viable. After illustrating the different kinds of data acquisition and the data processing, the results of the proposed methodology are illustrated in a case study. Finally, a series of theoretical and practical aspects are discussed to clarify some issues involved in the overall procedure (data acquisition and processing).

  5. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  6. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control the errors and improve the accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  7. HYSEP: A Computer Program for Streamflow Hydrograph Separation and Analysis

    USGS Publications Warehouse

    Sloto, Ronald A.; Crouse, Michele Y.

    1996-01-01

    HYSEP is a computer program that can be used to separate a streamflow hydrograph into base-flow and surface-runoff components. The base-flow component has traditionally been associated with ground-water discharge and the surface-runoff component with precipitation that enters the stream as overland runoff. HYSEP includes three methods of hydrograph separation that are referred to in the literature as the fixed-interval, sliding-interval, and local-minimum methods. The program also describes the frequency and duration of measured streamflow and computed base flow and surface runoff. Daily mean stream discharge is used as input to the program in either an American Standard Code for Information Interchange (ASCII) or binary format. Output from the program includes tables, graphs, and data files. Graphical output may be plotted on the computer screen or output to a printer, plotter, or metafile.
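
    A simplified sketch of the local-minimum idea (in HYSEP proper the interval width is derived from the drainage area, and the fixed- and sliding-interval variants differ in detail):

      import numpy as np

      def local_minimum_baseflow(q, width):
          """Connect local minima of daily discharge q by linear interpolation.
          width: odd interval length in days; base flow is capped at streamflow."""
          q = np.asarray(q, dtype=float)
          half = width // 2
          idx = [i for i in range(len(q))
                 if q[i] == q[max(0, i - half):i + half + 1].min()]
          base = np.interp(np.arange(len(q)), idx, q[idx])
          return np.minimum(base, q)

      # surface runoff is then q - local_minimum_baseflow(q, width)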

  8. Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS

    NASA Astrophysics Data System (ADS)

    Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.

    2012-07-01

    A new approach for the nondestructive discrimination between genuine wild ginsengs and counterfeit ones by near-infrared spectroscopy (NIRS) was developed. Both discriminant analysis and a back-propagation artificial neural network (BP-ANN) were applied to build the discrimination models. Optimal modeling wavelengths were determined based on the anomalous spectral information of the counterfeit samples. Through principal component analysis (PCA) of various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for determining the number of principal component (PC) factors. Discriminant analysis achieved an identification ratio of 88.46%. With the samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results sufficiently demonstrate that NIRS combined with a BP-ANN classification algorithm performs better on ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
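
    A scikit-learn sketch of such a pipeline (MLPClassifier stands in for the paper's three-layer BP-ANN; the component count would come from the cumulative-variance inspection described above, and all names are illustrative):

      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      def discriminate_ginseng(X, y):
          """X: (n_samples, n_wavelengths) NIR spectra; y: 1 = genuine, 0 = counterfeit."""
          model = make_pipeline(
              StandardScaler(),
              PCA(n_components=10),      # factor count from cumulative variance
              MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
          )
          return cross_val_score(model, X, y, cv=5).mean()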

  9. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    NASA Astrophysics Data System (ADS)

    de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru

    2005-10-01

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
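
    The authors' exact autoregressive formulation is not reproduced here, but a related lag-based extraction (AMUSE-style) conveys the idea: whiten the channels, then pick the direction most autocorrelated at the delay found from the fetal autocorrelation peak. A sketch, with all names illustrative and full-rank channel data assumed:

      import numpy as np

      def extract_by_lag(X, lag):
          """X: (n_channels, n_samples) MCG recording; lag: delay in samples
          taken from the dominant peak of the fetal autocorrelation."""
          X = X - X.mean(axis=1, keepdims=True)
          C0 = X @ X.T / X.shape[1]
          d, E = np.linalg.eigh(C0)
          W = E @ np.diag(d ** -0.5) @ E.T          # whitening transform
          Z = W @ X
          Ct = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
          Cs = (Ct + Ct.T) / 2                      # symmetrized lagged covariance
          vals, vecs = np.linalg.eigh(Cs)
          w = vecs[:, np.argmax(vals)]              # most autocorrelated direction
          return w @ Z                              # extracted source estimate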

  10. [Extraction of evoked related potentials by using the combination of independent component analysis and wavelet analysis].

    PubMed

    Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua

    2010-08-01

    In this paper we present a new method combining independent component analysis (ICA) and a wavelet de-noising algorithm to extract evoked related potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze EEG signals and obtain the independent components (ICs). Then, the WaveShrink (WS) method is applied to the demixed ICs as an intermediate step, and the EEG data are rebuilt by applying the inverse ICA to the new ICs; the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and the plain ICA method could remove the eye and muscle artifacts mixed into the ERPs, while the combined method could also retain the brain neural activity mixed into the noise ICs and could efficiently extract weak ERPs from strong background artifacts.
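
    A sketch of the combined scheme, with FastICA standing in for the extended Infomax algorithm used in the paper and pywt supplying the wavelet-shrink step (wavelet, level, and threshold rule are illustrative):

      import numpy as np
      import pywt
      from sklearn.decomposition import FastICA

      def ica_wavelet_denoise(eeg, wavelet="db4", level=5):
          """eeg: (n_channels, n_samples). Unmix into ICs, soft-threshold each
          IC's wavelet coefficients so neural activity buried in 'noise' ICs
          survives, then remix into channel space."""
          ica = FastICA(n_components=eeg.shape[0], random_state=0)
          S = ica.fit_transform(eeg.T)                       # (n_samples, n_ICs)
          n = S.shape[0]
          S_den = np.empty_like(S)
          for k in range(S.shape[1]):
              coeffs = pywt.wavedec(S[:, k], wavelet, level=level)
              sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise scale
              thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
              coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                      for c in coeffs[1:]]
              S_den[:, k] = pywt.waverec(coeffs, wavelet)[:n]
          return ica.inverse_transform(S_den).T              # back to channels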

  11. A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package

    ERIC Educational Resources Information Center

    Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.

    2013-01-01

    DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…

  12. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  13. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
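
    For reference, a baseline spectral-kurtosis computation (the plain definition, not the paper's optimized variant): for stationary Gaussian noise SK is near 0, while a constant-amplitude sinusoid drives SK toward -1, so strongly negative bins flag harmonic components rather than structural modes.

      import numpy as np
      from scipy.signal import stft

      def spectral_kurtosis(x, fs, nperseg=1024):
          """SK(f) = E|X|^4 / (E|X|^2)^2 - 2, estimated over STFT frames."""
          f, t, Z = stft(x, fs=fs, nperseg=nperseg)
          P = np.abs(Z) ** 2
          sk = (P ** 2).mean(axis=1) / P.mean(axis=1) ** 2 - 2
          return f, sk

      # bins with sk close to -1 are candidate harmonic components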

  14. Lippia origanoides chemotype differentiation based on essential oil GC-MS and principal component analysis.

    PubMed

    Stashenko, Elena E; Martínez, Jairo R; Ruíz, Carlos A; Arias, Ginna; Durán, Camilo; Salgar, William; Cala, Mónica

    2010-01-01

    Chromatographic (GC/flame ionization detection, GC/MS) and statistical analyses were applied to the study of essential oils and extracts obtained from flowers, leaves, and stems of Lippia origanoides plants, growing wild in different Colombian regions. Retention indices, mass spectra, and standard substances were used in the identification of 139 substances detected in these essential oils and extracts. Principal component analysis allowed L. origanoides classification into three chemotypes, characterized according to their essential oil major components. Alpha- and beta-phellandrenes, p-cymene, and limonene distinguished chemotype A; carvacrol and thymol were the distinctive major components of chemotypes B and C, respectively. Pinocembrin (5,7-dihydroxyflavanone) was found in L. origanoides chemotype A supercritical fluid (CO(2)) extract at a concentration of 0.83+/-0.03 mg/g of dry plant material, which makes this plant an interesting source of an important bioactive flavanone with diverse potential applications in cosmetic, food, and pharmaceutical products.

  15. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    PubMed

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and the localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract the heart sound components (S1 and S2) based on their time and frequency characteristics. The algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3% respectively. The median difference between annotated and detected events was 33.9 ms.

  16. Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems

    NASA Astrophysics Data System (ADS)

    Mullen, Douglas Gurnett

    Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of a poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic acid-targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, active targeting was not found in vivo using a mouse tumor model. A major flaw of this platform design was that it did not provide for characterization or control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions was examined using a model dendrimer-ligand system. High pressure liquid chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak-fitting analysis enabled the quantification of the component distribution. Quantified distributions were found to be significantly more heterogeneous than commonly expected, and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimers with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%. This system has the potential to serve as a platform to which a precise number of functional molecules can be attached and has the potential to dramatically improve platform efficacy. An additional investigation of reproducibility challenges for current dendrimer-based platform designs is also described. The quality of mass transport during the partial acetylation reaction of the dendrimer was found to have a major impact on subsequent dendrimer-ligand distributions that cannot be detected by standard analytical techniques. Consequently, this reaction should be eliminated from the platform design. Finally, optimized protocols for purification and characterization of PAMAM dendrimers are detailed.

  17. Assessing School Work Culture: A Higher-Order Analysis and Strategy.

    ERIC Educational Resources Information Center

    Johnson, William L.; Johnson, Annabel M.; Zimmerman, Kurt J.

    This paper reviews a work culture productivity model and reports the development of a work culture instrument based on the culture productivity model. Higher order principal components analysis was used to assess work culture, and a third-order factor analysis shows how the first-order factors group into higher-order factors. The school work…

  18. Teaching Data Analysis with Interactive Visual Narratives

    ERIC Educational Resources Information Center

    Saundage, Dilal; Cybulski, Jacob L.; Keller, Susan; Dharmasena, Lasitha

    2016-01-01

    Data analysis is a major part of business analytics (BA), which refers to the skills, methods, and technologies that enable managers to make swift, quality decisions based on large amounts of data. BA has become a major component of Information Systems (IS) courses all over the world. The challenge for IS educators is to teach data analysis--the…

  19. Comparative Analysis.

    DTIC Science & Technology

    1987-11-01

    differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based … comparative analysis as an important component; the explanation is used in many different ways … one method of automated design is the principled …

  20. Analysis of an algae-based CELSS. II - Options and weight analysis

    NASA Technical Reports Server (NTRS)

    Holtzapple, Mark T.; Little, Frank E.; Moses, William M.; Patterson, C. O.

    1989-01-01

    Life support components are evaluated for application to an idealized closed life support system which includes an algal reactor for food production. Weight-based trade studies are reported as 'break-even' time for replacing food stores with a regenerative bioreactor. It is concluded that closure of the life support gases (oxygen recovery) depends on the carbon dioxide reduction chemistry and that an algae-based food production can provide an attractive alternative to re-supply for longer duration missions.

  2. Modality-Spanning Deficits in Attention-Deficit/Hyperactivity Disorder in Functional Networks, Gray Matter, and White Matter

    PubMed Central

    Kessler, Daniel; Angstadt, Michael; Welsh, Robert C.

    2014-01-01

    Previous neuroimaging investigations in attention-deficit/hyperactivity disorder (ADHD) have separately identified distributed structural and functional deficits, but interconnections between these deficits have not been explored. To unite these modalities in a common model, we used joint independent component analysis, a multivariate, multimodal method that identifies cohesive components that span modalities. Based on recent network models of ADHD, we hypothesized that altered relationships between large-scale networks, in particular, default mode network (DMN) and task-positive networks (TPNs), would co-occur with structural abnormalities in cognitive regulation regions. For 756 human participants in the ADHD-200 sample, we produced gray and white matter volume maps with voxel-based morphometry, as well as whole-brain functional connectomes. Joint independent component analysis was performed, and the resulting transmodal components were tested for differential expression in ADHD versus healthy controls. Four components showed greater expression in ADHD. Consistent with our a priori hypothesis, we observed reduced DMN-TPN segregation co-occurring with structural abnormalities in dorsolateral prefrontal cortex and anterior cingulate cortex, two important cognitive control regions. We also observed altered intranetwork connectivity in DMN, dorsal attention network, and visual network, with co-occurring distributed structural deficits. There was strong evidence of spatial correspondence across modalities: for all four components, the impact of the respective component on gray matter at a region strongly predicted the impact on functional connectivity at that region. Overall, our results demonstrate that ADHD involves multiple cohesive modality-spanning deficits, each of which exhibits strong spatial overlap in the pattern of structural and functional alterations. PMID:25505309

  3. Establishing ¹H nuclear magnetic resonance based metabonomics fingerprinting profile for spinal cord injury: a pilot study.

    PubMed

    Jiang, Hua; Peng, Jin; Zhou, Zhi-yuan; Duan, Yu; Chen, Wei; Cai, Bin; Yang, Hao; Zhang, Wei

    2010-09-01

    Spinal cord injury (SCI) is a complex trauma that involves multiple pathological mechanisms, including cytotoxicity, oxidative stress, and immune-endocrine disturbances. This study aimed to establish a plasma metabonomics fingerprinting atlas for SCI using (1)H nuclear magnetic resonance (NMR) based metabonomics methodology and principal component analysis techniques. Nine Sprague-Dawley (SD) male rats were randomly divided into SCI, normal and sham-operation control groups. Plasma samples were collected for (1)H NMR spectroscopy 3 days after operation. The NMR data were analyzed using the principal component analysis technique with Matlab software. Metabonomics analysis was able to distinguish the three groups (SCI, normal control, sham-operation). The fingerprinting atlas indicated that, compared with those without SCI, the SCI group demonstrated the following characteristics with regard to the second principal component: it is made up of fatty acids, myo-inositol, arginine, very low-density lipoprotein (VLDL), low-density lipoprotein (LDL), triglyceride (TG), glucose, and 3-methyl-histamine. The data indicated that SCI results in several significant early changes in plasma metabolism and that a metabonomics approach based on (1)H NMR spectroscopy can provide a metabolic profile comprising several metabolite classes and allow for relative quantification of such changes. The results also provided support for further development and application of metabonomics technologies for studying SCI and for the utilization of multivariate models for classifying the extent of trauma within an individual.

  4. Comparative Analysis of Metabolic Syndrome Components in over 15,000 African Americans Identifies Pleiotropic Variants: Results from the PAGE Study

    PubMed Central

    Carty, Cara L.; Bhattacharjee, Samsiddhi; Haessler, Jeff; Cheng, Iona; Hindorff, Lucia A.; Aroda, Vanita; Carlson, Christopher S.; Hsu, Chun-Nan; Wilkens, Lynne; Liu, Simin; Selvin, Elizabeth; Jackson, Rebecca; North, Kari E.; Peters, Ulrike; Pankow, James S.; Chatterjee, Nilanjan; Kooperberg, Charles

    2014-01-01

    Background: Metabolic syndrome (MetS) refers to the clustering of cardio-metabolic risk factors including dyslipidemia, central adiposity, hypertension and hyperglycemia in individuals. Identification of pleiotropic genetic factors associated with MetS traits may shed light on key pathways or mediators underlying MetS. Methods and Results: Using the Metabochip array in 15,148 African Americans (AA) from the PAGE Study, we identify susceptibility loci and investigate pleiotropy among genetic variants using a subset-based meta-analysis method, ASsociation-analysis-based-on-subSETs (ASSET). Unlike conventional models, which lack power when associations for MetS components are null or have opposite effects, ASSET uses one-sided tests to detect positive and negative associations for components separately and combines tests accounting for correlations among components. With ASSET, we identify 27 SNPs in 1 glucose and 4 lipids loci (TCF7L2, LPL, APOA5, CETP, LPL, APOC1/APOE/TOMM40) significantly associated with MetS components overall, all P < 2.5e-7, the Bonferroni-adjusted P-value. Three loci replicate in a Hispanic population, n = 5172. A novel AA-specific variant, rs12721054/APOC1, and rs10096633/LPL are associated with ≥3 MetS components. We find additional evidence of pleiotropy for APOE, TOMM40, TCF7L2 and CETP variants, many with opposing effects; e.g., the same rs7901695/TCF7L2 allele is associated with increased odds of high glucose and decreased odds of central adiposity. Conclusions: We highlight a method to increase power in large-scale genomic association analyses, and report a novel variant associated with all MetS components in AA. We also identify pleiotropic associations that may be clinically useful in patient risk profiling and for informing translational research of potential gene targets and medications. PMID:25023634

  5. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, the environmental tobacco smoke (ETS) items were not a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc.

  6. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis for built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or maintaining a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  7. Structural design methodologies for ceramic-based material systems

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Chulya, Abhisak; Gyekenyesi, John P.

    1991-01-01

    One of the primary pacing items for realizing the full potential of ceramic-based structural components is the development of new design methods and protocols. The focus here is on low temperature, fast-fracture analysis of monolithic, whisker-toughened, laminated, and woven ceramic composites. A number of design models and criteria are highlighted. Public domain computer algorithms, which aid engineers in predicting the fast-fracture reliability of structural components, are mentioned. Emphasis is not placed on evaluating the models, but instead is focused on the issues relevant to the current state of the art.

  8. The influence of physical factors on recognizing blood cells in the computer microscopy systems of acute leukemia diagnosis

    NASA Astrophysics Data System (ADS)

    Nikitaev, V. G.; Pronichev, A. N.; Polyakov, E. V.; Dmitrieva, V. V.; Tupitsyn, N. N.; Frenkel, M. A.; Mozhenkova, A. V.

    2017-01-01

    The work investigated the effect of the choice of color space component on blood cell detection based on the calculation of texture attributes of blood cell nuclei in bone marrow. The study identified the most informative color spaces and the texture characteristics of blood cells computed for components of these spaces. A significance ratio was introduced to assess the quality of features. We proposed features that enable lymphocytes to be distinguished from lymphoblasts. The selection of the features was based on the results of the data analysis.

  9. Identification and apportionment of hazardous elements in the sediments in the Yangtze River estuary.

    PubMed

    Wang, Jiawei; Liu, Ruimin; Wang, Haotian; Yu, Wenwen; Xu, Fei; Shen, Zhenyao

    2015-12-01

    In this study, positive matrix factorization (PMF) and principal component analysis (PCA) were combined to identify and apportion pollution-based sources of hazardous elements in the surface sediments in the Yangtze River estuary (YRE). Source identification analysis indicated that PC1, including Al, Fe, Mn, Cr, Ni, As, Cu, and Zn, can be defined as a sewage component; PC2, including Pb and Sb, can be considered an atmospheric deposition component; and PC3, containing Cd and Hg, can be considered an agricultural nonpoint component. To better identify the sources and quantitatively apportion the concentrations to their sources, eight sources were identified with PMF: agricultural/industrial sewage mixed (18.6 %), mining wastewater (15.9 %), agricultural fertilizer (14.5 %), atmospheric deposition (12.8 %), agricultural nonpoint (10.6 %), industrial wastewater (9.8 %), marine activity (9.0 %), and nickel plating industry (8.8 %). Overall, the hazardous element content appears to be connected more to anthropogenic activity than to natural sources. The PCA results laid the foundation for the PMF analysis by providing a general classification of sources. PMF resolved more factors with a higher explained variance than PCA, and provided both internal and quantitative analyses. The combination of the two methods can provide more reasonable and reliable results.
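
    As a rough stand-in for PMF (which additionally weights every entry by its measurement uncertainty), a plain nonnegative factorization shows the shape of the apportionment; the function name is hypothetical and the eight-source count merely mirrors the study's solution:

      import numpy as np
      from sklearn.decomposition import NMF

      def apportion_sources(X, n_sources=8):
          """X: (n_samples, n_elements) nonnegative concentrations.
          Factor X ~ G @ F into contributions G and source profiles F."""
          model = NMF(n_components=n_sources, init="nndsvda",
                      max_iter=1000, random_state=0)
          G = model.fit_transform(X)       # per-sample source contributions
          F = model.components_            # per-source element profiles
          mass = G.sum(axis=0) * F.sum(axis=1)
          return G, F, mass / mass.sum()   # rough fractional contributions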

  10. Computational model for the analysis of cartilage and cartilage tissue constructs

    PubMed Central

    Smith, David W.; Gardiner, Bruce S.; Davidson, John B.; Grodzinsky, Alan J.

    2013-01-01

    We propose a new non-linear poroelastic model that is suited to the analysis of soft tissues. In this paper the model is tailored to the analysis of cartilage and the engineering design of cartilage constructs. The proposed continuum formulation of the governing equations enables the strain of the individual material components within the extracellular matrix (ECM) to be followed over time, as the individual material components are synthesized, assembled and incorporated within the ECM or lost through passive transport or degradation. The material component analysis developed here naturally captures the effect of time-dependent changes of ECM composition on the deformation and internal stress states of the ECM. For example, it is shown that increased synthesis of aggrecan by chondrocytes embedded within a decellularized cartilage matrix initially devoid of aggrecan results in osmotic expansion of the newly synthesized proteoglycan matrix and tension within the structural collagen network. Specifically, we predict that the collagen network experiences a tensile strain, with a maximum of ~2% at the fixed base of the cartilage. The analysis of an example problem demonstrates the temporal and spatial evolution of the stresses and strains in each component of a self-equilibrating composite tissue construct, and the role played by the flux of water through the tissue. PMID:23784936

  11. An Integrated Strategy to Qualitatively Differentiate Components of Raw and Processed Viticis Fructus Based on NIR, HPLC and UPLC-MS Analysis.

    PubMed

    Diao, Jiayin; Xu, Can; Zheng, Huiting; He, Siyi; Wang, Shumei

    2018-06-21

    Viticis Fructus is a traditional Chinese herbal drug processed by various methods to achieve different clinical purposes. Thermal treatment potentially alters the chemical composition, which may impact effectiveness and toxicity. In order to interpret the constituent discrepancies between raw and processed (stir-fried) Viticis Fructus, a multivariate detection method (NIR, HPLC, and UPLC-MS) based on metabonomics and chemometrics was developed. Firstly, synergy interval partial least squares and partial least squares-discriminant analysis were employed to screen the distinctive wavebands (4319-5459 cm(-1)) based on preprocessed near-infrared spectra. Then, HPLC with principal component analysis was performed to characterize the distinction. Subsequently, a total of 49 compounds were identified by UPLC-MS, among which 42 were eventually characterized as changing significantly during processing via semiquantitative volcano plot analysis. Moreover, based on the partial least squares-discriminant analysis, 16 compounds were chosen as characteristic markers in close correlation with the discriminatory near-infrared wavebands. Together, all of these characterization techniques effectively discriminated the raw and processed products of Viticis Fructus. In general, our work provides an integrated way of classifying Viticis Fructus, and a strategy to explore discriminatory chemical markers for other traditional Chinese herbs, thus ensuring safety and efficacy for consumers. Georg Thieme Verlag KG Stuttgart · New York.

  12. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on principal component analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace spanned by the principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
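
    A single iteration of the projected inversion, as a hedged numpy sketch (symbols are illustrative; in the paper the sensitivity matrix and the analytical outputs are recomputed from the updated model at each iteration):

      import numpy as np

      def update_parameters(S, r, U_k):
          """S   : sensitivity matrix, d(output)/d(theta)
          r   : measured output minus analytical output
          U_k : (n_outputs, k) leading principal directions of the
                analytical output ensemble.
          Projecting onto U_k filters noise before the least-squares step."""
          dtheta, *_ = np.linalg.lstsq(U_k.T @ S, U_k.T @ r, rcond=None)
          return dtheta

      # iterate: theta += update_parameters(...); rebuild the model; repeat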

  13. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  14. Reaction dynamics analysis of a reconstituted Escherichia coli protein translation system by computational modeling

    PubMed Central

    Matsuura, Tomoaki; Tanimura, Naoki; Hosoda, Kazufumi; Yomo, Tetsuya; Shimizu, Yoshihiro

    2017-01-01

    To elucidate the dynamic features of a biologically relevant large-scale reaction network, we constructed a computational model of minimal protein synthesis consisting of 241 components and 968 reactions that synthesize the Met-Gly-Gly (MGG) peptide, based on an Escherichia coli-based reconstituted in vitro protein synthesis system. We performed a simulation using parameters collected primarily from the literature and found that the rate of MGG peptide synthesis becomes nearly constant in minutes, thus achieving a steady state similar to experimental observations. In addition, the concentrations of 70% of the components, including intermediates, reached a plateau in a few minutes. However, the concentration change of each component exhibits several temporal plateaus, or a quasi-stationary state (QSS), before reaching the final plateau. To understand these complex dynamics, we focused on whether the components reached a QSS, mapped the arrangement of components in a QSS in the entire reaction network structure, and investigated time-dependent changes. We found that components in a QSS form clusters that grow over time but not in a linear fashion, and that this process involves the collapse and regrowth of clusters before the formation of a final large single cluster. These observations might commonly occur in other large-scale biological reaction networks. The analysis developed here might be useful for understanding large-scale biological reactions by visualizing complex dynamics, thereby extracting the characteristics of the reaction network, including phase transitions. PMID:28167777

  15. A Discrepancy-Based Methodology for Nuclear Training Program Evaluation.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1991-01-01

    A three-phase comprehensive process for commercial nuclear power training program evaluation is presented. The discrepancy-based methodology was developed after the Three Mile Island nuclear reactor accident. It facilitates analysis of program components to identify discrepancies among program specifications, actual outcomes, and industry…

  16. The Development and Validation of the Online Shopping Addiction Scale.

    PubMed

    Zhao, Haiyan; Tian, Wei; Xin, Tao

    2017-01-01

    We report the development and validation of a scale to measure online shopping addiction. Inspired by previous theories and research on behavioral addiction, the Griffiths's widely accepted six-factor component model was referred to and an 18-item scale was constructed, with each component measured by three items. The results of exploratory factor analysis, based on Sample 1 (999 college students) and confirmatory factor analysis, based on Sample 2 (854 college students) showed the Griffiths's substantive six-factor structure underlay the online shopping addiction scale. Cronbach's alpha suggested that the resulting scale was highly reliable. Concurrent validity, based on Sample 3 (328 college students), was also satisfactory as indicated by correlations between the scale and measures of similar constructs. Finally, self-perceived online shopping addiction can be predicted to a relatively high degree. The present 18-item scale is a solid theory-based instrument to empirically measure online shopping addiction and can be used for understanding the phenomena among young adults.
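
    For reference, the reliability statistic the authors report is straightforward to compute; a small sketch (the 18-item design and three-item grouping come from the paper, everything else is generic):

      import numpy as np

      def cronbach_alpha(scores):
          """scores: (n_respondents, n_items) matrix of item scores."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      # e.g. alpha for each 3-item component, then for the full 18-item scale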

  18. Spatiotemporal patterns of ERP based on combined ICA-LORETA analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Guo, Taomei; Xu, Yaqin; Zhao, Xiaojie; Yao, Li

    2007-03-01

    In contrast to the fMRI methods widely used up to now, this method tries to understand more profoundly how brain systems work during a sentence processing task by accurately mapping the spatiotemporal patterns of activity of the large neuronal populations in the human brain from the analysis of ERP data recorded on the scalp. In this study, an event-related brain potential (ERP) paradigm recording the on-line responses to the processing of sentences is chosen as an example. In order to exploit the ERPs' millisecond temporal resolution while overcoming the insensitivity of ERP source localization, we separate these sources in space and time with a combined method based on independent component analysis (ICA) and low-resolution tomography (LORETA) algorithms. ICA blindly separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then analyzed with LORETA to locate its cerebral sources throughout the full brain, under the assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral computation mechanism underlying content-word reading is mediated by the orchestrated activity of several spatially distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are grouped into different statistically independent components. Thus combined ICA-LORETA analysis provides an encouraging and effective method to study brain dynamics from ERP data.

  19. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g., a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2- and 3-dimensional analysis codes. The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS 0-dimensional engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.

  20. The Application of Structured Job Analysis Information Based on the Position Analysis Questionnaire (PAQ). Final Report No. 9.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.

    The Position Analysis Questionnaire (PAQ) is a job analysis instrument consisting of 187 job elements organized into six divisions. The PAQ was used in the eight studies summarized in this final report. The studies were: (1) ratings of the attribute requirements of PAQ job elements, (2) a series of principal components analyses of these attribute…

  1. Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-10-01

    Based on the epidemic dynamical system, we construct a new agent-based financial time series model. To check and verify its rationality, we compare the statistical properties of the time series model with those of two real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine the multi-parameter analysis with the tail distribution analysis, the modified rescaled range analysis, and the multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in both the real returns and the proposed model. Therefore, the new agent-based financial model can reproduce some important features of real stock markets.
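
    As a flavor of the long-range dependence check, the sketch below estimates a Hurst exponent by classical rescaled range (R/S) analysis; the window sizes and the synthetic return series are illustrative, and the paper's modified R/S analysis adds refinements not shown here.

        import numpy as np

        def rescaled_range(x):
            y = np.cumsum(x - x.mean())   # cumulative deviation profile
            return (y.max() - y.min()) / x.std()

        def hurst(x, window_sizes):
            rs = [np.mean([rescaled_range(w) for w in
                           np.array_split(x, len(x) // n)]) for n in window_sizes]
            # H is the slope of log(R/S) against log(n); H > 0.5 indicates
            # long-range dependence in the returns.
            return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

        rng = np.random.default_rng(1)
        returns = rng.normal(size=4096)   # stand-in for model or index returns
        print(hurst(returns, [16, 32, 64, 128, 256]))   # ~0.5 for white noise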

  2. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated against present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. Several approaches are available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components, with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach, available in Section III of the ASME Boiler and Pressure Vessel Code and the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, and those load magnitudes need to be consistent with the loads produced by the linear system analyses in which the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the non-standard component finite element model has been matched to the loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under those loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
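
    The load/flexibility iteration described above is a fixed-point loop. The sketch below is a toy illustration, with system_loads and fe_flexibility as hypothetical stand-ins for the linear piping-system analysis and the component finite element analysis; the closures and tolerances are invented.

        def converge_flexibility(k0, system_loads, fe_flexibility,
                                 tol=1e-4, max_iter=50):
            k = k0
            for _ in range(max_iter):
                load = system_loads(k)        # linear system analysis with current k
                k_new = fe_flexibility(load)  # FE-based k at that load magnitude
                if abs(k_new - k) < tol * abs(k):
                    return k_new, load        # mutually consistent k and load
                k = k_new
            raise RuntimeError("flexibility-factor iteration did not converge")

        # Toy behavior: a more flexible component sheds load, and the FE-derived
        # flexibility factor grows mildly with applied load.
        k, load = converge_flexibility(
            k0=2.0,
            system_loads=lambda k: 100.0 / (1.0 + 0.1 * k),
            fe_flexibility=lambda p: 1.5 + 0.01 * p,
        )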

  3. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For a quantitative assessment of the algorithms, the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) indices are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. The quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that GMCA is highly effective for remote sensing image denoising; visually, the image recovered by GMCA is hard to distinguish from the original noiseless image.
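
    The two quality indices are standard and easy to reproduce; the sketch below computes both with scikit-image on synthetic images that stand in for remote sensing data.

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        rng = np.random.default_rng(0)
        clean = rng.random((128, 128))
        noisy = np.clip(clean + 0.05 * rng.normal(size=clean.shape), 0, 1)

        print(peak_signal_noise_ratio(clean, noisy, data_range=1.0))  # gray-level fidelity
        print(structural_similarity(clean, noisy, data_range=1.0))    # structure-level fidelity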

  4. Relevance between SV and components based on water quality inspection by gas plumes

    NASA Astrophysics Data System (ADS)

    Nakanishi, A.; Aoyama, C.; Fukuoka, H.; Tajima, H.; Kumagai, H.; Takahashi, A.

    2017-12-01

    Gas and hydrate seeping from the seafloor into the ocean can be monitored on board as images on an echogram (the inboard display of acoustic equipment) by using acoustic measurement equipment such as multi-beam sonars. The colors and shades of these images vary depending on the acoustic impedance. Volume backscattering strength (hereinafter referred to as SV) depends on the type and density of the plume components, so plume components should not be inferred from the echogram images alone. By standardizing the relationship between gas plume SV and plume composition, the type of plume component can be estimated simply by calculating the plume SV from multi-beam data. Data from the following cruises will be used for the analysis of metal sensor measurements, CTD measurements, and sampling: July 2017, KAIYO-MARU2 (KAIYO ENGINEERING CO., LTD), Sea of Japan; July 2017, SHINYO MARU (Tokyo University of Marine Science and Technology), Sea of Japan. Chemical data obtained during the YK16-07 cruise will also be discussed.

  5. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group (AIAG) Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility (GRR) method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and the results were compared numerically. The two methods were also compared in terms of their advantages and disadvantages. One difference between the methods lies in the calculation of the variation components: the AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not equal 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV and AV) gives the total variation of 100 %. Acceptance of the two methods among professionals, their future use, and their acceptance by the manufacturing industry are also discussed. The AIAG method is currently the leading method in industry.
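
    The bookkeeping difference is easy to see numerically. The sketch below uses illustrative variance components (not measured data) to show that shares of variance sum to exactly 100 % while shares of standard deviation do not.

        ev, av, pv = 0.0016, 0.0004, 0.0080   # equipment, appraiser, part variances

        total_var = ev + av + pv
        # Honest GRR convention: shares of variance sum to exactly 100 %.
        shares_var = [100 * v / total_var for v in (ev, av, pv)]

        total_sd = total_var ** 0.5
        # AIAG convention: shares of standard deviation exceed 100 % in total.
        shares_sd = [100 * v ** 0.5 / total_sd for v in (ev, av, pv)]

        print(sum(shares_var))   # 100.0
        print(sum(shares_sd))    # about 149.4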

  6. Short-term PV/T module temperature prediction based on PCA-RBF neural network

    NASA Astrophysics Data System (ADS)

    Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng

    2018-02-01

    To address the nonlinearity and large inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed, so that the PV/T system controller can act ahead of time on the basis of the short-term forecast and thereby optimize the control effect. Based on an analysis of the correlation between PV/T module temperature, meteorological factors, and the temperature at adjacent points of the time series, the principal component analysis (PCA) method is used to preprocess the original input sample data, which are then fed to an RBF neural network. The simulation results show that PCA preprocessing gives the prediction model higher accuracy and stronger generalization performance than an RBF neural network without principal component extraction.
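
    The sketch below illustrates the PCA-then-predict pattern; kernel ridge regression with an RBF kernel stands in for the paper's RBF neural network, and the inputs are synthetic stand-ins for the meteorological factors.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))   # synthetic irradiance, ambient T, wind, ...
        y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * rng.normal(size=500)  # module T

        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=3),        # keep the leading principal components
            KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1),
        )
        model.fit(X[:400], y[:400])
        print(model.score(X[400:], y[400:]))   # R^2 on held-out samples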

  7. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) resist profiles and lessening standing wave effects. However, full 3-D chemically amplified resist simulation is not widely adopted during postlayout optimization because of its long run time and huge memory usage. An efficient simulation method, based on the Sylvester equation and the Abbe-principal component analysis method, is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques such as source mask optimization and subresolution assist features. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  8. Three-Dimensional Kinematic Analysis of Prehension Movements in Young Children with Autism Spectrum Disorder: New Insights on Motor Impairment.

    PubMed

    Campione, Giovanna Cristina; Piazza, Caterina; Villa, Laura; Molteni, Massimo

    2016-06-01

    The study was aimed at better clarifying whether action execution impairment in autism depends mainly on disruptions either in feedforward mechanisms or in feedback-based control processes supporting motor execution. To this purpose, we analyzed prehension movement kinematics in 4- and 5-year-old children with autism and in peers with typical development. Statistical analysis showed that the kinematics of the grasp component was spared in autism, whereas early kinematics of the reach component was atypical. We discussed this evidence as suggesting impairment in the feedforward processes involved in action execution, whereas impairment in feedback-based control processes remained unclear. We proposed that certain motor abilities are available in autism, and children may use them differently as a function of motor context complexity.

  9. Digital Correction of Motion Artifacts in Microscopy Image Sequences Collected from Living Animals Using Rigid and Non-Rigid Registration

    PubMed Central

    Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2013-01-01

    Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
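
    The two-term cost described above can be written compactly. The sketch below is a simplified numpy illustration, pairing a sum-of-squared-differences similarity term with a second-difference smoothness penalty on the control-point grid; the weight lam and the array shapes are illustrative, not the authors' implementation.

        import numpy as np

        def registration_cost(fixed, warped, grid, lam=0.1):
            similarity = np.sum((fixed - warped) ** 2)   # image similarity term
            # Penalize bending of the B-spline control-point grid via
            # second differences along each grid axis.
            d2x = np.diff(grid, n=2, axis=0)
            d2y = np.diff(grid, n=2, axis=1)
            smoothness = np.sum(d2x ** 2) + np.sum(d2y ** 2)
            return similarity + lam * smoothness

        fixed = np.zeros((64, 64))
        warped = np.full((64, 64), 0.1)       # image after the current deformation
        grid = np.zeros((8, 8, 2))            # control-point displacements (x, y)
        print(registration_cost(fixed, warped, grid))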

  10. Component Cell-Based Restriction of Spectral Conditions and the Impact on CPV Module Power Rating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew T; Steiner, Marc; Siefer, Gerald

    One approach to accounting for the prevailing spectral conditions when performing CPV module power ratings according to the standard IEC 62670-3 is based on spectral matching ratios (SMRs) determined by means of component cell sensors. In this work, an uncertainty analysis of the SMR approach is performed based on a dataset of spectral irradiances created with SMARTS2. Using these illumination spectra, the respective efficiencies of multijunction solar cells with different cell architectures are calculated. These efficiencies are used to analyze the influence of different component cell sensors and SMR filtering methods. The three main findings of this work are as follows. First, component cells based on the lattice-matched triple-junction (LM3J) cell are suitable for restricting spectral conditions and are qualified for the standardized power rating of CPV modules, even if the CPV module uses multijunction cells other than LM3J. Second, filtering all three SMRs to within +/-3.0% of unity results, in the worst case, in an underestimation of -1.7% and an overestimation of +2.4% relative to the AM1.5d efficiency. Third, there is no benefit in matching the component cells to the module cell with respect to the measurement uncertainty.
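
    An SMR is the ratio, between two junctions, of their short-circuit currents under the test spectrum normalized by their currents under the reference spectrum. The sketch below is illustrative only: the spectra and spectral responses are invented stand-ins, and real component cell sensors measure these currents directly.

        import numpy as np

        def isc(spectrum, spectral_response, wl):
            # Short-circuit current from spectral irradiance and response.
            return np.trapz(spectrum * spectral_response, wl)

        def smr(spec, ref_spec, sr_a, sr_b, wl):
            # SMR of junction a relative to junction b: unity under the
            # reference spectrum, above unity when the spectrum favors a.
            return (isc(spec, sr_a, wl) / isc(ref_spec, sr_a, wl)) / \
                   (isc(spec, sr_b, wl) / isc(ref_spec, sr_b, wl))

        wl = np.linspace(300, 1800, 600)                 # wavelength grid, nm
        ref = np.exp(-((wl - 800) / 400) ** 2)           # stand-in reference spectrum
        blue_rich = ref * (1800 - wl) / 1000             # spectrum tilted to the blue
        sr_top = np.exp(-((wl - 500) / 120) ** 2)        # top-junction response
        sr_mid = np.exp(-((wl - 800) / 120) ** 2)        # middle-junction response
        print(smr(blue_rich, ref, sr_top, sr_mid, wl))   # > 1 for a blue-rich sky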

  11. U.S. Geological Survey groundwater toolbox, a graphical and mapping interface for analysis of hydrologic data (version 1.0): user guide for estimation of base flow, runoff, and groundwater recharge from streamflow data

    USGS Publications Warehouse

    Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark

    2015-01-01

    This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—as well as the RORA recession-curve displacement method and the associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open-source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to these hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, the downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and the analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.
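
    For a feel of how hydrograph separation works, the sketch below implements a greatly simplified local-minimum separation in the spirit of HYSEP; the window width and the synthetic daily record are illustrative, and the USGS program applies interval rules not reproduced here.

        import numpy as np

        def local_minimum_baseflow(q, half_window=5):
            n = len(q)
            # Indices where flow is the minimum of its surrounding window.
            mins = [i for i in range(n)
                    if q[i] == q[max(0, i - half_window):i + half_window + 1].min()]
            # Connect the local minima by interpolation; cap at total flow.
            base = np.interp(np.arange(n), mins, q[mins])
            return np.minimum(base, q)

        rng = np.random.default_rng(0)
        t = np.arange(365)
        q = 10 + 5 * np.sin(2 * np.pi * t / 365) + rng.gamma(0.3, 20.0, size=365)
        runoff = q - local_minimum_baseflow(q)   # surface-runoff component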

  12. Detection of gear cracks in a complex gearbox of wind turbines using supervised bounded component analysis of vibration signals collected from multi-channel sensors

    NASA Astrophysics Data System (ADS)

    Li, Zhixiong; Yan, Xinping; Wang, Xuping; Peng, Zhongxiao

    2016-06-01

    In the complex gear transmission systems of wind turbines, a crack is one of the most common failure modes and can be fatal to the wind turbine power system. A single sensor may suffer from issues relating to its installation position and direction, resulting in the collection of weak dynamic responses from the cracked gear. A multi-channel sensor system is hence applied for signal acquisition, and blind source separation (BSS) technologies are employed to optimally process the information collected from multiple sensors. However, a literature review finds that most BSS-based fault detectors do not address the dependence/correlation between different moving components in the gear system; in particular, the popular independent component analysis (ICA) assumes mutual independence of the different vibration sources. The fault detection performance may be significantly influenced by the dependence/correlation between vibration sources. To address this issue, this paper presents a new method based on supervised order tracking bounded component analysis (SOTBCA) for gear crack detection in wind turbines. Bounded component analysis (BCA) is a state-of-the-art technology for dependent source separation that has so far been applied mainly to communication signals. To make it applicable to vibration analysis, order tracking is incorporated into the BCA framework to eliminate noise and disturbance signal components. An autoregressive (AR) model built with prior knowledge about the crack fault is then employed to supervise the reconstruction of the crack vibration source signature. The SOTBCA outputs only the one source signal that has the closest distance to the AR model. Owing to the dependence tolerance of the BCA framework, interfering vibration sources that are dependent on or correlated with the crack vibration source can be recognized by the SOTBCA, and hence only useful fault information is preserved in the reconstructed signal. The crack failure can thus be precisely identified by cyclic spectral correlation analysis. A series of numerical simulations and experimental tests has been conducted to illustrate the advantages of the proposed SOTBCA method for fatigue crack detection. Comparisons with three representative techniques, i.e., Erdogan's BCA (E-BCA), joint approximate diagonalization of eigen-matrices (JADE), and FastICA, demonstrate the effectiveness of the SOTBCA. The proposed approach is therefore suitable for accurate gear crack detection in practical applications.

  13. Performance-based seismic design of nonstructural building components: The next frontier of earthquake engineering

    NASA Astrophysics Data System (ADS)

    Filiatrault, Andre; Sullivan, Timothy

    2014-08-01

    With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major knowledge gaps that will need to be filled by future research. Furthermore, considering recent trends in earthquake engineering, the paper explores how performance-based seismic design might be conceived for nonstructural components, drawing on recent developments made in the field of seismic design and hinting at the specific considerations required for nonstructural components.

  14. Deepak Condenser Model (DeCoM)

    NASA Technical Reports Server (NTRS)

    Patel, Deepak

    2013-01-01

    Development of DeCoM arose from the requirement of analyzing the performance of a condenser. The condenser, a component of a loop heat pipe (LHP), is interfaced with the radiator in order to reject heat. DeCoM simulates the condenser given certain input parameters. Systems Improved Numerical Differencing Analyzer (SINDA), a thermal analysis software package, calculates the adjoining component temperatures based on the DeCoM parameters and the interface temperatures to the radiator. Application of DeCoM is (at the time of this reporting) restricted to small-scale analysis, without the need for in-depth LHP component integration. DeCoM was developed to meet this purpose with the least complexity: it is a single-condenser, single-pass simulator for analyzing condenser behavior. The analysis is based on the interactions between the condenser fluid, the wall, and the interface between the wall and the radiator, using conservation of energy, two-phase equations, and flow equations. For two-phase flow, the Lockhart-Martinelli correlation is used to calculate the convection value between fluid and wall. Software such as SINDA (for thermal analysis) and Thermal Desktop (for modeling) is required. DeCoM can be implemented in a thermal model, and users can inspect the code process and edit it to their specific needs. DeCoM requires no license and is open-source code. Advantages of DeCoM include time dependency, reliability, and the ability for users to view the code and adapt it to their needs.
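
    The Lockhart-Martinelli step can be sketched as follows; the correlation form (turbulent-turbulent parameter and a C = 20 two-phase multiplier) is a common textbook variant, and the property values are illustrative, not DeCoM's actual inputs.

        def martinelli_parameter(x, rho_l, rho_g, mu_l, mu_g):
            # Turbulent-turbulent form of the Martinelli parameter X_tt,
            # with x the vapor quality.
            return ((1 - x) / x) ** 0.9 * (rho_g / rho_l) ** 0.5 \
                   * (mu_l / mu_g) ** 0.1

        def two_phase_multiplier(x_tt, c=20.0):
            # Liquid-phase multiplier phi_l^2 = 1 + C/X + 1/X^2.
            return 1.0 + c / x_tt + 1.0 / x_tt ** 2

        x_tt = martinelli_parameter(x=0.5, rho_l=600.0, rho_g=9.0,
                                    mu_l=1.3e-4, mu_g=1.0e-5)
        print(two_phase_multiplier(x_tt))   # scales the single-phase result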

  15. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data.

    PubMed

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-08-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.

  16. DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data

    PubMed Central

    Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu

    2013-01-01

    High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/. PMID:23657089

  17. Pretreatment and integrated analysis of spectral data reveal seaweed similarities based on chemical diversity.

    PubMed

    Wei, Feifei; Ito, Kengo; Sakata, Kenji; Date, Yasuhiro; Kikuchi, Jun

    2015-03-03

    Extracting useful information from high dimensionality and large data sets is a major challenge for data-driven approaches. The present study was aimed at developing novel integrated analytical strategies for comprehensively characterizing seaweed similarities based on chemical diversity. The chemical compositions of 107 seaweed and 2 seagrass samples were analyzed using multiple techniques, including Fourier transform infrared (FT-IR) and solid- and solution-state nuclear magnetic resonance (NMR) spectroscopy, thermogravimetry-differential thermal analysis (TG-DTA), inductively coupled plasma-optical emission spectrometry (ICP-OES), CHNS/O total elemental analysis, and isotope ratio mass spectrometry (IR-MS). The spectral data were preprocessed using non-negative matrix factorization (NMF) and NMF combined with multivariate curve resolution-alternating least-squares (MCR-ALS) methods in order to separate individual component information from the overlapping and/or broad spectral peaks. Integrated analysis of the preprocessed chemical data demonstrated distinct discrimination of differential seaweed species. Further network analysis revealed a close correlation between the heavy metal elements and characteristic components of brown algae, such as cellulose, alginic acid, and sulfated mucopolysaccharides, providing a componential basis for its metal-sorbing potential. These results suggest that this integrated analytical strategy is useful for extracting and identifying the chemical characteristics of diverse seaweeds based on large chemical data sets, particularly complicated overlapping spectral data.
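
    The NMF preprocessing step can be prototyped directly; the sketch below runs scikit-learn's NMF on a synthetic non-negative matrix standing in for the paper's spectral data, and the component counts are illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        pure = rng.random((3, 200))              # 3 hidden component spectra
        conc = rng.random((109, 3))              # loadings for 109 samples
        X = conc @ pure + 0.01 * rng.random((109, 200))

        model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X)   # per-sample component contributions
        H = model.components_        # recovered component spectra
        # W (optionally refined by MCR-ALS, as in the paper) then feeds the
        # integrated similarity and network analysis across samples.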

  18. Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis.

    PubMed

    Opriş, David; Pintea, Sebastian; García-Palacios, Azucena; Botella, Cristina; Szamosközi, Ştefan; David, Daniel

    2012-02-01

    Virtual reality exposure therapy (VRET) is a promising intervention for the treatment of the anxiety disorders. The main objective of this meta-analysis is to compare the efficacy of VRET, used in a behavioral or cognitive-behavioral framework, with that of the classical evidence-based treatments, in anxiety disorders. A comprehensive search of the literature identified 23 studies (n = 608) that were included in the final analysis. The results show that in the case of anxiety disorders, (1) VRET does far better than the waitlist control; (2) the post-treatment results show similar efficacy between the behavioral and the cognitive behavioral interventions incorporating a virtual reality exposure component and the classical evidence-based interventions, with no virtual reality exposure component; (3) VRET has a powerful real-life impact, similar to that of the classical evidence-based treatments; (4) VRET has a good stability of results over time, similar to that of the classical evidence-based treatments; (5) there is a dose-response relationship for VRET; and (6) there is no difference in the dropout rate between the virtual reality exposure and the in vivo exposure. Implications are discussed. © 2011 Wiley Periodicals, Inc.

  19. PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1994-01-01

    The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel-shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common-block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104K bytes of memory. The program was developed in 1988.

  20. Optical probe

    DOEpatents

    Hencken, Kenneth; Flower, William L.

    1999-01-01

    A compact optical probe is disclosed, particularly useful for the analysis of emissions in industrial environments. The instant invention provides a geometry for optically based measurements that allows all optical components (source, detector, relay optics, etc.) to be located in proximity to one another. The geometry of the probe disclosed herein provides a means for making optical measurements in environments where it is difficult and/or expensive to gain access to the vicinity of the flow stream to be measured. Significantly, the lens geometry of the optical probe allows the analysis location within a monitored flow stream to be moved while maintaining optical alignment of all components, even when the optical probe is focused on a plurality of different analysis points within the flow stream.

  1. An analysis of satellite state vector observability using SST tracking data

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.; Hammond, C. L.

    1976-01-01

    The observability of satellite state vectors using only SST tracking data was investigated by covariance analysis under a variety of satellite and station configurations. The results indicate very precarious observability in most short-arc cases. The consequence is large variances on many state components, such as the downrange component of the relay satellite position. To illustrate the impact of the observability problems, an example is given of two distinct satellite orbit pairs generating essentially the same data arc. The physical bases for unobservability are outlined and related to proposed TDRSS configurations. The results are relevant to any mission depending upon TDRSS to determine the satellite state. The required mathematical analysis and the software used are described.

  2. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper, a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF), and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of the abnormal acoustic signal, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound first appears as the starting position of each segment. In this study, the abnormal sound belonged to the shock fault type; thus, a starting-position search method based on gradient variance was adopted. A similarity coefficient between two signals of equal size is introduced, and by thresholding this coefficient the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.

  3. Marine traffic model based on cellular automaton: Considering the change of the ship's velocity under the influence of the weather and sea

    NASA Astrophysics Data System (ADS)

    Qi, Le; Zheng, Zhongyi; Gang, Longhui

    2017-10-01

    Ships' velocity changes under the influence of weather and sea conditions, e.g., wind, waves, currents, and tide, are significant and must be considered in a marine traffic model. Therefore, a new marine traffic model based on a cellular automaton (CA) is proposed in this paper that takes the characteristics of these velocity changes into account. First, the acceleration of a ship is divided into two components: a regular component and a random component. Second, the mathematical functions and statistical distribution parameters of the two components are determined by spectral analysis, curve fitting, and auto-correlation analysis. Third, by combining the two components, the acceleration is regenerated in the update rules for the ships' movement. To test the performance of the model, the ship traffic flows in the Dover Strait, the Changshan Channel, and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of the ships' velocities in the simulations are consistent with data measured by the Automatic Identification System (AIS). Although the traffic flows in the different areas have different characteristics, the ships' velocities are simulated correctly in each case, demonstrating that velocities under the influence of weather and sea can be reproduced by the proposed model.
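
    The two-component update rule can be sketched simply; the slow sinusoid for the regular component and the AR(1) noise for the random component, along with all parameters, are illustrative choices rather than the paper's fitted functions.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n = 1.0, 600
        v = np.empty(n)
        v[0] = 12.0                       # initial speed, knots
        a_rand = 0.0
        for k in range(1, n):
            a_reg = 0.05 * np.sin(2 * np.pi * k / 300)   # regular (environmental) forcing
            a_rand = 0.8 * a_rand + 0.02 * rng.normal()  # auto-correlated random component
            v[k] = v[k - 1] + (a_reg + a_rand) * dt      # velocity update per CA step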

  4. The early-type multiple system QZ Carinae

    NASA Astrophysics Data System (ADS)

    Mayer, P.; Lorenz, R.; Drechsel, H.; Abseim, A.

    2001-02-01

    We present an analysis of the early-type quadruple system QZ Car, consisting of an eclipsing and a non-eclipsing binary. The spectroscopic investigation is based on new high-dispersion echelle and CAT/CES spectra of H and He lines. The elements of the orbit of the non-eclipsing pair could be refined. Lines of the brighter component of the eclipsing binary were detected in near-quadrature spectra, while signatures of the fainter component could be identified in only a few spectra. Lines of the primary component of the non-eclipsing pair and of both components of the eclipsing pair were found to be variable in position and strength; in particular, the He II 4686 emission line of the brighter eclipsing component is strongly variable. An ephemeris for the eclipsing binary QZ Car valid at present was derived: Prim. Min. = hel. JD 2448687.16 + 5.9991 d × E. The relative orbit of the two binary constituents of the multiple system is discussed. In contrast to earlier investigations, we found radial velocity changes in the systemic velocities of both binaries, which were used, together with an O-C analysis of the expected light-time effect, to derive approximate parameters of the mutual orbit of the two pairs. It is shown that this orbit and the distance to QZ Car can be further refined by minima timing and interferometry. Based on observations collected at the European Southern Observatory, La Silla, Chile.

  5. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…

  6. Residential expansion as a continental threat to U.S. coastal ecosystems

    Treesearch

    J.G. Bartlett; D.M. Mageean; R.J. O'Connor

    2000-01-01

    Spatially extensive analysis of satellite, climate, and census data reveals human-environment interactions of regional or continental concern in the United States. A grid-based principal components analysis of Bureau of Census variables revealed two independent demographic phenomena, a-settlement reflecting traditional human settlement patterns and p-settlement...

  7. Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.

    ERIC Educational Resources Information Center

    Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki

    2000-01-01

    Describes a new component called the "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)

  8. The Mediating Effects of Generative Cognition on Imagination Stimulation

    ERIC Educational Resources Information Center

    Hsu, Yuling; Liang, Chaoyun; Chang, Chi-Cheng

    2014-01-01

    This study, based in Taiwan, aims to explore what psychological factors influence imagination stimulation of education major students, and what the relationship is between these factors and imagination. Both principal component analysis and confirmatory factor analysis were employed to determine the most appropriate structure of the developed…

  9. Opportunities for Applied Behavior Analysis in the Total Quality Movement.

    ERIC Educational Resources Information Center

    Redmon, William K.

    1992-01-01

    This paper identifies critical components of recent organizational quality improvement programs and specifies how applied behavior analysis can contribute to quality technology. Statistical Process Control and Total Quality Management approaches are compared, and behavior analysts are urged to build their research base and market behavior change…

  10. A Review of Functional Analysis Methods Conducted in Public School Classroom Settings

    ERIC Educational Resources Information Center

    Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.

    2016-01-01

    The use of functional behavior assessments (FBAs) to address problem behavior in classroom settings has increased as a result of education legislation and long-standing evidence supporting function-based interventions. Although functional analysis remains the standard for identifying behavior--environment functional relations, this component is…

  11. Discrete Event Simulation of a Suppression of Enemy Air Defenses (SEAD) Mission

    DTIC Science & Technology

    2008-03-01

    component-based DES developed in Java® using the Simkit simulation package. Analysis of ship self air defense system selection (Turan, 1999) is another... Institute of Technology, Wright-Patterson AFB OH, March 2003 (ADA445279). Turan, Bulent. A Comparative Analysis of Ship Self Air Defense (SSAD) Systems

  12. The Construction of Job Families Based on the Component and Overall Dimensions of the PAQ.

    ERIC Educational Resources Information Center

    Taylor, L. R.

    1978-01-01

    Seventy-six insurance company jobs were analyzed by 203 raters in an effort to assess the potential usefulness of the Position Analysis Questionnaire (PAQ) as a job analysis device to be employed in a more extensive, company-wide research program. (Editor/RK)

  13. A Preliminary Analysis of a Behavioral Classrooms Needs Assessment

    ERIC Educational Resources Information Center

    Leaf, Justin B.; Leaf, Ronald; McCray, Cynthia; Lamkins, Carol; Taubman, Mitchell; McEachin, John; Cihon, Joseph H.

    2016-01-01

    Today many special education classrooms implement procedures based upon the principles of Applied Behavior Analysis (ABA) to establish educationally relevant skills and decrease aberrant behaviors. However, it is difficult for school staff and consultants to evaluate the implementation of various components of ABA and general classroom setup. In…

  14. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability (NTCP) modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n = 0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated with the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients of advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and FLR models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.

  15. Proposed Models of Appropriate Website and Courseware for E-Learning in Higher Education: Research Based Design Models

    ERIC Educational Resources Information Center

    Khlaisang, Jintavee

    2010-01-01

    The purpose of this study was to investigate proper website and courseware for e-learning in higher education. Methods used in this study included the data collection, the analysis surveys, the experts' in-depth interview, and the experts' focus group. Results indicated that there were 16 components for website, as well as 16 components for…

  16. Electrostatics determine vibrational frequency shifts in hydrogen bonded complexes.

    PubMed

    Dey, Arghya; Mondal, Sohidul Islam; Sen, Saumik; Ghosh, Debashree; Patwari, G Naresh

    2014-12-14

    The red-shifts in the acetylenic C-H stretching vibration of C-H∙∙∙X (X = O, N) hydrogen-bonded complexes increase with an increase in the basicity of the Lewis base. Analysis of various components of stabilization energy suggests that the observed red-shifts are correlated with the electrostatic component of the stabilization energy, while the dispersion modulates the stabilization energy.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fast, J; Zhang, Q; Tilp, A

    Significantly improved returns on aerosol chemistry data can be achieved through the development of a value-added product (VAP) for deriving organic aerosol (OA) components, called Organic Aerosol Components (OACOMP). OACOMP is primarily based on multivariate analysis of the measured organic mass spectral matrix. The key outputs of OACOMP are the concentration time series and the mass spectra of OA factors that are associated with distinct sources, formation and evolution processes, and physicochemical properties.

  18. Transcriptional pathway and de novo network-based approaches to effects-based monitoring in the Great Lakes

    EPA Science Inventory

    Transcriptomics provides unique solutions for understanding the impact of complex mixtures and their components on aquatic systems. Here we describe the application of transcriptomics analysis of in situ fathead minnow exposures for assessing biological impacts of wastewater trea...

  19. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.

  20. Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.

    PubMed

    Manceau, Alain; Marcus, Matthew; Lenoir, Thomas

    2014-09-01

    Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra of the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated from the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test estimators critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of component recovery from a random mixture depends on the size of the dataset and the number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs yields one or two values close to the true number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which incorporates the experimental error into XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A graphical, user-friendly interface for the calculation of EVs, IND, the F-test, and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
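
    The eigenvalue-based estimators are simple to reproduce. The sketch below builds a three-component synthetic mixture and evaluates the eigenvalue cascade together with one common form of Malinowski's indicator function; the dataset sizes, noise level, and the exact IND normalization are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        spectra = rng.random((3, 400))                  # 3 pure-compound spectra
        fractions = rng.dirichlet(np.ones(3), size=40)  # 40 mixture compositions
        X = fractions @ spectra + 0.005 * rng.normal(size=(40, 400))

        r, c = X.shape
        ev = np.linalg.eigvalsh(X @ X.T)[::-1]          # eigenvalues, descending
        # IND(k) = RE(k) / (s - k)^2 with s = min(r, c); the estimated number
        # of components is the k at which IND is minimal.
        ind = [np.sqrt(ev[k:].sum() / (c * (r - k))) / (r - k) ** 2
               for k in range(1, r - 1)]
        print(np.argmin(ind) + 1)                       # expected: 3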
